00:00:00.000 Started by upstream project "autotest-per-patch" build number 132298 00:00:00.000 originally caused by: 00:00:00.000 Started by user sys_sgci 00:00:00.097 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.098 The recommended git tool is: git 00:00:00.098 using credential 00000000-0000-0000-0000-000000000002 00:00:00.102 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.143 Fetching changes from the remote Git repository 00:00:00.145 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.205 Using shallow fetch with depth 1 00:00:00.205 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.205 > git --version # timeout=10 00:00:00.258 > git --version # 'git version 2.39.2' 00:00:00.258 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.288 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.288 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.298 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.309 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.322 Checking out Revision b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf (FETCH_HEAD) 00:00:06.322 > git config core.sparsecheckout # timeout=10 00:00:06.337 > git read-tree -mu HEAD # timeout=10 00:00:06.352 > git checkout -f b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=5 00:00:06.367 Commit message: "jenkins/jjb-config: Ignore OS version mismatch under freebsd" 00:00:06.368 > git rev-list --no-walk b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=10 00:00:06.447 [Pipeline] Start of Pipeline 00:00:06.459 [Pipeline] library 00:00:06.461 Loading library shm_lib@master 00:00:06.461 Library shm_lib@master is cached. Copying from home. 00:00:06.474 [Pipeline] node 00:00:06.484 Running on VM-host-SM17 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:06.485 [Pipeline] { 00:00:06.492 [Pipeline] catchError 00:00:06.493 [Pipeline] { 00:00:06.501 [Pipeline] wrap 00:00:06.508 [Pipeline] { 00:00:06.514 [Pipeline] stage 00:00:06.516 [Pipeline] { (Prologue) 00:00:06.532 [Pipeline] echo 00:00:06.534 Node: VM-host-SM17 00:00:06.540 [Pipeline] cleanWs 00:00:06.548 [WS-CLEANUP] Deleting project workspace... 00:00:06.548 [WS-CLEANUP] Deferred wipeout is used... 
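The jbp checkout above amounts to a shallow, single-ref fetch of the job-config repository through the lab proxy, followed by a detached checkout of the fetched commit. A rough stand-alone equivalent (commit hash taken from the log above; credential and proxy handling omitted) is:

    # Sketch: reproduce the shallow jbp checkout shown above by hand.
    git init jbp && cd jbp
    git fetch --tags --force --progress --depth=1 \
        https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master
    # Detached checkout of the commit the log reports as FETCH_HEAD:
    git checkout -f b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf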
00:00:06.554 [WS-CLEANUP] done 00:00:06.749 [Pipeline] setCustomBuildProperty 00:00:06.852 [Pipeline] httpRequest 00:00:09.874 [Pipeline] echo 00:00:09.876 Sorcerer 10.211.164.101 is dead 00:00:09.887 [Pipeline] httpRequest 00:00:12.906 [Pipeline] echo 00:00:12.909 Sorcerer 10.211.164.101 is dead 00:00:12.917 [Pipeline] httpRequest 00:00:12.973 [Pipeline] echo 00:00:12.975 Sorcerer 10.211.164.96 is dead 00:00:12.984 [Pipeline] httpRequest 00:00:13.291 [Pipeline] echo 00:00:13.293 Sorcerer 10.211.164.20 is alive 00:00:13.305 [Pipeline] retry 00:00:13.308 [Pipeline] { 00:00:13.322 [Pipeline] httpRequest 00:00:13.327 HttpMethod: GET 00:00:13.327 URL: http://10.211.164.20/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:13.328 Sending request to url: http://10.211.164.20/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:13.329 Response Code: HTTP/1.1 200 OK 00:00:13.329 Success: Status code 200 is in the accepted range: 200,404 00:00:13.330 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:13.475 [Pipeline] } 00:00:13.493 [Pipeline] // retry 00:00:13.502 [Pipeline] sh 00:00:13.783 + tar --no-same-owner -xf jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:13.797 [Pipeline] httpRequest 00:00:14.112 [Pipeline] echo 00:00:14.114 Sorcerer 10.211.164.20 is alive 00:00:14.123 [Pipeline] retry 00:00:14.124 [Pipeline] { 00:00:14.139 [Pipeline] httpRequest 00:00:14.144 HttpMethod: GET 00:00:14.144 URL: http://10.211.164.20/packages/spdk_f1a181ac34ff9dc22c85383e5b547bbedfdae1bf.tar.gz 00:00:14.145 Sending request to url: http://10.211.164.20/packages/spdk_f1a181ac34ff9dc22c85383e5b547bbedfdae1bf.tar.gz 00:00:14.146 Response Code: HTTP/1.1 404 Not Found 00:00:14.147 Success: Status code 404 is in the accepted range: 200,404 00:00:14.148 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_f1a181ac34ff9dc22c85383e5b547bbedfdae1bf.tar.gz 00:00:14.152 [Pipeline] } 00:00:14.169 [Pipeline] // retry 00:00:14.177 [Pipeline] sh 00:00:14.456 + rm -f spdk_f1a181ac34ff9dc22c85383e5b547bbedfdae1bf.tar.gz 00:00:14.468 [Pipeline] retry 00:00:14.469 [Pipeline] { 00:00:14.485 [Pipeline] checkout 00:00:14.495 The recommended git tool is: NONE 00:00:14.507 using credential 00000000-0000-0000-0000-000000000002 00:00:14.509 Wiping out workspace first. 
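In the requests above, the jbp tarball is served from the package cache (HTTP 200) and unpacked, while the spdk tarball returns 404; because 404 is in the accepted range, the pipeline treats it as a cache miss and falls back to cloning the SPDK repository (using the local reference repo), then repacks it and uploads the tarball back to the cache further down. A sketch of that cache-or-rebuild flow, with curl standing in for the Jenkins httpRequest step (illustrative only, not the actual pipeline code):

    # Sketch of the cache-or-rebuild behaviour visible above (illustrative only).
    sha=f1a181ac34ff9dc22c85383e5b547bbedfdae1bf
    pkg="spdk_${sha}.tar.gz"
    if curl -fsS -o "$pkg" "http://10.211.164.20/packages/$pkg"; then
        tar --no-same-owner -xf "$pkg"     # cache hit: just unpack
    else
        rm -f "$pkg"                       # 404: rebuild the package locally
        git clone --reference /var/ci_repos/spdk_multi \
            https://review.spdk.io/gerrit/a/spdk/spdk spdk
        # ...check out the change under test, repack, then PUT the tarball
        # back to the cache, as the pipeline does further down in this log.
    fi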
00:00:14.517 Cloning the remote Git repository 00:00:14.521 Honoring refspec on initial clone 00:00:14.522 Cloning repository https://review.spdk.io/gerrit/a/spdk/spdk 00:00:14.523 > git init /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk # timeout=10 00:00:14.530 Using reference repository: /var/ci_repos/spdk_multi 00:00:14.530 Fetching upstream changes from https://review.spdk.io/gerrit/a/spdk/spdk 00:00:14.530 > git --version # timeout=10 00:00:14.533 > git --version # 'git version 2.25.1' 00:00:14.533 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:14.536 Setting http proxy: proxy-dmz.intel.com:911 00:00:14.537 > git fetch --tags --force --progress -- https://review.spdk.io/gerrit/a/spdk/spdk refs/changes/12/25212/7 +refs/heads/master:refs/remotes/origin/master # timeout=10 00:00:23.974 Avoid second fetch 00:00:23.990 Checking out Revision f1a181ac34ff9dc22c85383e5b547bbedfdae1bf (FETCH_HEAD) 00:00:23.955 > git config remote.origin.url https://review.spdk.io/gerrit/a/spdk/spdk # timeout=10 00:00:23.959 > git config --add remote.origin.fetch refs/changes/12/25212/7 # timeout=10 00:00:23.963 > git config --add remote.origin.fetch +refs/heads/master:refs/remotes/origin/master # timeout=10 00:00:23.975 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:23.984 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:23.991 > git config core.sparsecheckout # timeout=10 00:00:23.994 > git checkout -f f1a181ac34ff9dc22c85383e5b547bbedfdae1bf # timeout=10 00:00:24.207 Commit message: "test/scheduler: Drop cpufreq_high_prio[@]" 00:00:24.207 > git rev-list --no-walk afa4bfe23022777522cacbcc424820cf1c23bf60 # timeout=10 00:00:24.233 > git remote # timeout=10 00:00:24.237 > git submodule init # timeout=10 00:00:24.292 > git submodule sync # timeout=10 00:00:24.343 > git config --get remote.origin.url # timeout=10 00:00:24.351 > git submodule init # timeout=10 00:00:24.395 > git config -f .gitmodules --get-regexp ^submodule\.(.+)\.url # timeout=10 00:00:24.399 > git config --get submodule.dpdk.url # timeout=10 00:00:24.403 > git remote # timeout=10 00:00:24.407 > git config --get remote.origin.url # timeout=10 00:00:24.411 > git config -f .gitmodules --get submodule.dpdk.path # timeout=10 00:00:24.414 > git config --get submodule.intel-ipsec-mb.url # timeout=10 00:00:24.417 > git remote # timeout=10 00:00:24.420 > git config --get remote.origin.url # timeout=10 00:00:24.424 > git config -f .gitmodules --get submodule.intel-ipsec-mb.path # timeout=10 00:00:24.427 > git config --get submodule.isa-l.url # timeout=10 00:00:24.430 > git remote # timeout=10 00:00:24.434 > git config --get remote.origin.url # timeout=10 00:00:24.437 > git config -f .gitmodules --get submodule.isa-l.path # timeout=10 00:00:24.440 > git config --get submodule.ocf.url # timeout=10 00:00:24.444 > git remote # timeout=10 00:00:24.448 > git config --get remote.origin.url # timeout=10 00:00:24.451 > git config -f .gitmodules --get submodule.ocf.path # timeout=10 00:00:24.454 > git config --get submodule.libvfio-user.url # timeout=10 00:00:24.457 > git remote # timeout=10 00:00:24.461 > git config --get remote.origin.url # timeout=10 00:00:24.465 > git config -f .gitmodules --get submodule.libvfio-user.path # timeout=10 00:00:24.468 > git config --get submodule.xnvme.url # timeout=10 00:00:24.471 > git remote # timeout=10 00:00:24.475 > git config --get remote.origin.url # timeout=10 00:00:24.478 > git config -f .gitmodules --get submodule.xnvme.path # timeout=10 00:00:24.482 > git config --get 
submodule.isa-l-crypto.url # timeout=10 00:00:24.485 > git remote # timeout=10 00:00:24.489 > git config --get remote.origin.url # timeout=10 00:00:24.492 > git config -f .gitmodules --get submodule.isa-l-crypto.path # timeout=10 00:00:24.498 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:24.498 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:24.498 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:24.498 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:24.499 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:24.499 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:24.499 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:24.502 Setting http proxy: proxy-dmz.intel.com:911 00:00:24.502 Setting http proxy: proxy-dmz.intel.com:911 00:00:24.502 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi ocf # timeout=10 00:00:24.502 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi dpdk # timeout=10 00:00:24.502 Setting http proxy: proxy-dmz.intel.com:911 00:00:24.502 Setting http proxy: proxy-dmz.intel.com:911 00:00:24.502 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi intel-ipsec-mb # timeout=10 00:00:24.502 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi isa-l # timeout=10 00:00:24.502 Setting http proxy: proxy-dmz.intel.com:911 00:00:24.502 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi libvfio-user # timeout=10 00:00:24.503 Setting http proxy: proxy-dmz.intel.com:911 00:00:24.503 Setting http proxy: proxy-dmz.intel.com:911 00:00:24.503 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi isa-l-crypto # timeout=10 00:00:24.503 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi xnvme # timeout=10 00:00:52.149 [Pipeline] dir 00:00:52.150 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:00:52.152 [Pipeline] { 00:00:52.167 [Pipeline] sh 00:00:52.448 ++ nproc 00:00:52.448 + threads=88 00:00:52.448 + git repack -a -d --threads=88 00:00:56.642 + git submodule foreach git repack -a -d --threads=88 00:00:56.642 Entering 'dpdk' 00:01:00.833 Entering 'intel-ipsec-mb' 00:01:00.833 Entering 'isa-l' 00:01:00.833 Entering 'isa-l-crypto' 00:01:01.091 Entering 'libvfio-user' 00:01:01.350 Entering 'ocf' 00:01:01.610 Entering 'xnvme' 00:01:02.176 + find .git -type f -name alternates -print -delete 00:01:02.176 .git/objects/info/alternates 00:01:02.176 .git/modules/libvfio-user/objects/info/alternates 00:01:02.176 .git/modules/intel-ipsec-mb/objects/info/alternates 00:01:02.176 .git/modules/isa-l/objects/info/alternates 00:01:02.176 .git/modules/isa-l-crypto/objects/info/alternates 00:01:02.176 .git/modules/dpdk/objects/info/alternates 00:01:02.176 .git/modules/ocf/objects/info/alternates 00:01:02.176 .git/modules/xnvme/objects/info/alternates 00:01:02.185 [Pipeline] } 00:01:02.199 [Pipeline] // dir 00:01:02.203 [Pipeline] } 00:01:02.216 [Pipeline] // retry 00:01:02.223 [Pipeline] sh 00:01:02.520 + hash pigz 00:01:02.520 + tar -czf spdk_f1a181ac34ff9dc22c85383e5b547bbedfdae1bf.tar.gz spdk 00:01:14.755 [Pipeline] retry 00:01:14.758 [Pipeline] { 00:01:14.773 [Pipeline] httpRequest 00:01:14.780 HttpMethod: PUT 00:01:14.781 URL: http://10.211.164.20/cgi-bin/sorcerer.py?group=packages&filename=spdk_f1a181ac34ff9dc22c85383e5b547bbedfdae1bf.tar.gz 00:01:14.782 Sending 
request to url: http://10.211.164.20/cgi-bin/sorcerer.py?group=packages&filename=spdk_f1a181ac34ff9dc22c85383e5b547bbedfdae1bf.tar.gz 00:01:17.238 Response Code: HTTP/1.1 200 OK 00:01:17.247 Success: Status code 200 is in the accepted range: 200 00:01:17.250 [Pipeline] } 00:01:17.271 [Pipeline] // retry 00:01:17.280 [Pipeline] echo 00:01:17.282 00:01:17.282 Locking 00:01:17.282 Waited 0s for lock 00:01:17.282 File already exists: /storage/packages/spdk_f1a181ac34ff9dc22c85383e5b547bbedfdae1bf.tar.gz 00:01:17.282 00:01:17.287 [Pipeline] sh 00:01:17.582 + git -C spdk log --oneline -n5 00:01:17.583 f1a181ac3 test/scheduler: Drop cpufreq_high_prio[@] 00:01:17.583 e081e4a1a test/scheduler: Calculate freq turbo range based on sysfs 00:01:17.583 83e8405e4 nvmf/fc: Qpair disconnect callback: Serialize FC delete connection & close qpair process 00:01:17.583 0eab4c6fb nvmf/fc: Validate the ctrlr pointer inside nvmf_fc_req_bdev_abort() 00:01:17.583 4bcab9fb9 correct kick for CQ full case 00:01:17.601 [Pipeline] writeFile 00:01:17.616 [Pipeline] sh 00:01:17.898 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:17.910 [Pipeline] sh 00:01:18.190 + cat autorun-spdk.conf 00:01:18.190 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:18.190 SPDK_TEST_NVMF=1 00:01:18.190 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:18.190 SPDK_TEST_URING=1 00:01:18.190 SPDK_TEST_USDT=1 00:01:18.190 SPDK_RUN_UBSAN=1 00:01:18.190 NET_TYPE=virt 00:01:18.190 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:18.197 RUN_NIGHTLY=0 00:01:18.199 [Pipeline] } 00:01:18.212 [Pipeline] // stage 00:01:18.227 [Pipeline] stage 00:01:18.229 [Pipeline] { (Run VM) 00:01:18.242 [Pipeline] sh 00:01:18.524 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:18.524 + echo 'Start stage prepare_nvme.sh' 00:01:18.524 Start stage prepare_nvme.sh 00:01:18.524 + [[ -n 4 ]] 00:01:18.524 + disk_prefix=ex4 00:01:18.524 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:01:18.524 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:01:18.524 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:01:18.524 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:18.524 ++ SPDK_TEST_NVMF=1 00:01:18.524 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:18.524 ++ SPDK_TEST_URING=1 00:01:18.524 ++ SPDK_TEST_USDT=1 00:01:18.524 ++ SPDK_RUN_UBSAN=1 00:01:18.524 ++ NET_TYPE=virt 00:01:18.524 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:18.524 ++ RUN_NIGHTLY=0 00:01:18.524 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:18.524 + nvme_files=() 00:01:18.524 + declare -A nvme_files 00:01:18.524 + backend_dir=/var/lib/libvirt/images/backends 00:01:18.524 + nvme_files['nvme.img']=5G 00:01:18.524 + nvme_files['nvme-cmb.img']=5G 00:01:18.524 + nvme_files['nvme-multi0.img']=4G 00:01:18.524 + nvme_files['nvme-multi1.img']=4G 00:01:18.524 + nvme_files['nvme-multi2.img']=4G 00:01:18.524 + nvme_files['nvme-openstack.img']=8G 00:01:18.524 + nvme_files['nvme-zns.img']=5G 00:01:18.524 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:18.524 + (( SPDK_TEST_FTL == 1 )) 00:01:18.524 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:18.524 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:18.524 + for nvme in "${!nvme_files[@]}" 00:01:18.525 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi2.img -s 4G 00:01:18.525 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:18.525 + for nvme in "${!nvme_files[@]}" 00:01:18.525 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-cmb.img -s 5G 00:01:18.525 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:18.525 + for nvme in "${!nvme_files[@]}" 00:01:18.525 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-openstack.img -s 8G 00:01:18.525 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:18.525 + for nvme in "${!nvme_files[@]}" 00:01:18.525 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-zns.img -s 5G 00:01:18.525 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:18.525 + for nvme in "${!nvme_files[@]}" 00:01:18.525 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi1.img -s 4G 00:01:18.525 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:18.525 + for nvme in "${!nvme_files[@]}" 00:01:18.525 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi0.img -s 4G 00:01:18.525 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:18.525 + for nvme in "${!nvme_files[@]}" 00:01:18.525 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme.img -s 5G 00:01:19.462 Formatting '/var/lib/libvirt/images/backends/ex4-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:19.462 ++ sudo grep -rl ex4-nvme.img /etc/libvirt/qemu 00:01:19.462 + echo 'End stage prepare_nvme.sh' 00:01:19.462 End stage prepare_nvme.sh 00:01:19.474 [Pipeline] sh 00:01:19.755 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:19.755 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex4-nvme.img -b /var/lib/libvirt/images/backends/ex4-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img -H -a -v -f fedora39 00:01:19.755 00:01:19.755 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 00:01:19.755 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:01:19.755 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:19.755 HELP=0 00:01:19.755 DRY_RUN=0 00:01:19.755 NVME_FILE=/var/lib/libvirt/images/backends/ex4-nvme.img,/var/lib/libvirt/images/backends/ex4-nvme-multi0.img, 00:01:19.755 NVME_DISKS_TYPE=nvme,nvme, 00:01:19.755 NVME_AUTO_CREATE=0 00:01:19.755 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img, 00:01:19.755 NVME_CMB=,, 00:01:19.755 NVME_PMR=,, 00:01:19.755 NVME_ZNS=,, 00:01:19.755 NVME_MS=,, 00:01:19.755 NVME_FDP=,, 
00:01:19.755 SPDK_VAGRANT_DISTRO=fedora39 00:01:19.755 SPDK_VAGRANT_VMCPU=10 00:01:19.755 SPDK_VAGRANT_VMRAM=12288 00:01:19.755 SPDK_VAGRANT_PROVIDER=libvirt 00:01:19.755 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:19.755 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:19.755 SPDK_OPENSTACK_NETWORK=0 00:01:19.755 VAGRANT_PACKAGE_BOX=0 00:01:19.755 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:19.756 FORCE_DISTRO=true 00:01:19.756 VAGRANT_BOX_VERSION= 00:01:19.756 EXTRA_VAGRANTFILES= 00:01:19.756 NIC_MODEL=e1000 00:01:19.756 00:01:19.756 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt' 00:01:19.756 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:22.286 Bringing machine 'default' up with 'libvirt' provider... 00:01:23.223 ==> default: Creating image (snapshot of base box volume). 00:01:23.223 ==> default: Creating domain with the following settings... 00:01:23.223 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1731667509_5307ee855fbfdebdbf89 00:01:23.223 ==> default: -- Domain type: kvm 00:01:23.223 ==> default: -- Cpus: 10 00:01:23.223 ==> default: -- Feature: acpi 00:01:23.223 ==> default: -- Feature: apic 00:01:23.223 ==> default: -- Feature: pae 00:01:23.223 ==> default: -- Memory: 12288M 00:01:23.223 ==> default: -- Memory Backing: hugepages: 00:01:23.223 ==> default: -- Management MAC: 00:01:23.223 ==> default: -- Loader: 00:01:23.223 ==> default: -- Nvram: 00:01:23.223 ==> default: -- Base box: spdk/fedora39 00:01:23.223 ==> default: -- Storage pool: default 00:01:23.223 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1731667509_5307ee855fbfdebdbf89.img (20G) 00:01:23.223 ==> default: -- Volume Cache: default 00:01:23.223 ==> default: -- Kernel: 00:01:23.223 ==> default: -- Initrd: 00:01:23.223 ==> default: -- Graphics Type: vnc 00:01:23.223 ==> default: -- Graphics Port: -1 00:01:23.223 ==> default: -- Graphics IP: 127.0.0.1 00:01:23.223 ==> default: -- Graphics Password: Not defined 00:01:23.223 ==> default: -- Video Type: cirrus 00:01:23.223 ==> default: -- Video VRAM: 9216 00:01:23.223 ==> default: -- Sound Type: 00:01:23.223 ==> default: -- Keymap: en-us 00:01:23.223 ==> default: -- TPM Path: 00:01:23.223 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:23.223 ==> default: -- Command line args: 00:01:23.223 ==> default: -> value=-device, 00:01:23.223 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:23.223 ==> default: -> value=-drive, 00:01:23.223 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme.img,if=none,id=nvme-0-drive0, 00:01:23.223 ==> default: -> value=-device, 00:01:23.223 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:23.223 ==> default: -> value=-device, 00:01:23.223 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:23.223 ==> default: -> value=-drive, 00:01:23.223 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:23.223 ==> default: -> value=-device, 00:01:23.223 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:23.223 ==> default: -> value=-drive, 00:01:23.223 ==> default: -> 
value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:23.223 ==> default: -> value=-device, 00:01:23.223 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:23.223 ==> default: -> value=-drive, 00:01:23.223 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:23.223 ==> default: -> value=-device, 00:01:23.223 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:23.224 ==> default: Creating shared folders metadata... 00:01:23.224 ==> default: Starting domain. 00:01:25.123 ==> default: Waiting for domain to get an IP address... 00:01:40.002 ==> default: Waiting for SSH to become available... 00:01:41.380 ==> default: Configuring and enabling network interfaces... 00:01:45.635 default: SSH address: 192.168.121.129:22 00:01:45.635 default: SSH username: vagrant 00:01:45.635 default: SSH auth method: private key 00:01:47.538 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:55.657 ==> default: Mounting SSHFS shared folder... 00:01:57.036 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:01:57.036 ==> default: Checking Mount.. 00:01:57.997 ==> default: Folder Successfully Mounted! 00:01:57.997 ==> default: Running provisioner: file... 00:01:58.937 default: ~/.gitconfig => .gitconfig 00:01:59.502 00:01:59.502 SUCCESS! 00:01:59.502 00:01:59.502 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:01:59.502 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:59.502 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:01:59.502 00:01:59.509 [Pipeline] } 00:01:59.520 [Pipeline] // stage 00:01:59.526 [Pipeline] dir 00:01:59.527 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt 00:01:59.528 [Pipeline] { 00:01:59.536 [Pipeline] catchError 00:01:59.537 [Pipeline] { 00:01:59.544 [Pipeline] sh 00:01:59.818 + vagrant ssh-config --host vagrant 00:01:59.818 + sed -ne /^Host/,$p 00:01:59.818 + tee ssh_conf 00:02:03.127 Host vagrant 00:02:03.127 HostName 192.168.121.129 00:02:03.127 User vagrant 00:02:03.127 Port 22 00:02:03.127 UserKnownHostsFile /dev/null 00:02:03.127 StrictHostKeyChecking no 00:02:03.127 PasswordAuthentication no 00:02:03.127 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:03.127 IdentitiesOnly yes 00:02:03.127 LogLevel FATAL 00:02:03.127 ForwardAgent yes 00:02:03.127 ForwardX11 yes 00:02:03.127 00:02:03.203 [Pipeline] withEnv 00:02:03.205 [Pipeline] { 00:02:03.219 [Pipeline] sh 00:02:03.497 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:03.497 source /etc/os-release 00:02:03.497 [[ -e /image.version ]] && img=$(< /image.version) 00:02:03.497 # Minimal, systemd-like check. 
00:02:03.497 if [[ -e /.dockerenv ]]; then 00:02:03.497 # Clear garbage from the node's name: 00:02:03.497 # agt-er_autotest_547-896 -> autotest_547-896 00:02:03.497 # $HOSTNAME is the actual container id 00:02:03.497 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:03.497 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:03.497 # We can assume this is a mount from a host where container is running, 00:02:03.497 # so fetch its hostname to easily identify the target swarm worker. 00:02:03.497 container="$(< /etc/hostname) ($agent)" 00:02:03.497 else 00:02:03.497 # Fallback 00:02:03.497 container=$agent 00:02:03.497 fi 00:02:03.497 fi 00:02:03.497 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:03.497 00:02:03.767 [Pipeline] } 00:02:03.781 [Pipeline] // withEnv 00:02:03.790 [Pipeline] setCustomBuildProperty 00:02:03.805 [Pipeline] stage 00:02:03.807 [Pipeline] { (Tests) 00:02:03.822 [Pipeline] sh 00:02:04.101 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:04.373 [Pipeline] sh 00:02:04.657 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:04.933 [Pipeline] timeout 00:02:04.933 Timeout set to expire in 1 hr 0 min 00:02:04.935 [Pipeline] { 00:02:04.952 [Pipeline] sh 00:02:05.232 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:05.799 HEAD is now at f1a181ac3 test/scheduler: Drop cpufreq_high_prio[@] 00:02:05.811 [Pipeline] sh 00:02:06.089 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:06.361 [Pipeline] sh 00:02:06.641 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:06.916 [Pipeline] sh 00:02:07.196 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:02:07.454 ++ readlink -f spdk_repo 00:02:07.454 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:07.454 + [[ -n /home/vagrant/spdk_repo ]] 00:02:07.454 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:07.454 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:07.454 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:07.454 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:07.454 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:07.454 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:02:07.454 + cd /home/vagrant/spdk_repo 00:02:07.454 + source /etc/os-release 00:02:07.454 ++ NAME='Fedora Linux' 00:02:07.454 ++ VERSION='39 (Cloud Edition)' 00:02:07.454 ++ ID=fedora 00:02:07.454 ++ VERSION_ID=39 00:02:07.454 ++ VERSION_CODENAME= 00:02:07.454 ++ PLATFORM_ID=platform:f39 00:02:07.454 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:07.454 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:07.454 ++ LOGO=fedora-logo-icon 00:02:07.454 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:07.454 ++ HOME_URL=https://fedoraproject.org/ 00:02:07.454 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:07.454 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:07.454 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:07.454 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:07.454 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:07.454 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:07.454 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:07.454 ++ SUPPORT_END=2024-11-12 00:02:07.454 ++ VARIANT='Cloud Edition' 00:02:07.454 ++ VARIANT_ID=cloud 00:02:07.454 + uname -a 00:02:07.454 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:07.454 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:07.713 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:07.713 Hugepages 00:02:07.713 node hugesize free / total 00:02:07.713 node0 1048576kB 0 / 0 00:02:07.972 node0 2048kB 0 / 0 00:02:07.972 00:02:07.972 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:07.972 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:07.972 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:07.972 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:02:07.972 + rm -f /tmp/spdk-ld-path 00:02:07.972 + source autorun-spdk.conf 00:02:07.972 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:07.972 ++ SPDK_TEST_NVMF=1 00:02:07.972 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:07.972 ++ SPDK_TEST_URING=1 00:02:07.972 ++ SPDK_TEST_USDT=1 00:02:07.972 ++ SPDK_RUN_UBSAN=1 00:02:07.972 ++ NET_TYPE=virt 00:02:07.972 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:07.972 ++ RUN_NIGHTLY=0 00:02:07.972 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:07.972 + [[ -n '' ]] 00:02:07.972 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:07.972 + for M in /var/spdk/build-*-manifest.txt 00:02:07.972 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:07.972 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:07.972 + for M in /var/spdk/build-*-manifest.txt 00:02:07.972 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:07.972 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:07.972 + for M in /var/spdk/build-*-manifest.txt 00:02:07.972 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:07.972 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:07.972 ++ uname 00:02:07.972 + [[ Linux == \L\i\n\u\x ]] 00:02:07.973 + sudo dmesg -T 00:02:07.973 + sudo dmesg --clear 00:02:07.973 + dmesg_pid=5188 00:02:07.973 + sudo dmesg -Tw 00:02:07.973 + [[ Fedora Linux == FreeBSD ]] 00:02:07.973 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:07.973 + 
UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:07.973 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:07.973 + [[ -x /usr/src/fio-static/fio ]] 00:02:07.973 + export FIO_BIN=/usr/src/fio-static/fio 00:02:07.973 + FIO_BIN=/usr/src/fio-static/fio 00:02:07.973 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:07.973 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:07.973 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:07.973 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:07.973 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:07.973 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:07.973 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:07.973 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:07.973 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:08.232 10:45:54 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:02:08.232 10:45:54 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:08.232 10:45:54 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:08.232 10:45:54 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:02:08.232 10:45:54 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:08.232 10:45:54 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_URING=1 00:02:08.232 10:45:54 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_TEST_USDT=1 00:02:08.232 10:45:54 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:02:08.232 10:45:54 -- spdk_repo/autorun-spdk.conf@7 -- $ NET_TYPE=virt 00:02:08.232 10:45:54 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:08.232 10:45:54 -- spdk_repo/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 00:02:08.232 10:45:54 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:02:08.232 10:45:54 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:08.232 10:45:54 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:02:08.232 10:45:54 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:08.232 10:45:54 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:08.232 10:45:54 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:08.232 10:45:54 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:08.232 10:45:54 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:08.232 10:45:54 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:08.232 10:45:54 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:08.232 10:45:54 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:08.232 10:45:54 -- paths/export.sh@5 -- $ export PATH 00:02:08.232 10:45:54 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:08.232 10:45:54 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:08.232 10:45:54 -- common/autobuild_common.sh@486 -- $ date +%s 00:02:08.232 10:45:54 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1731667554.XXXXXX 00:02:08.232 10:45:54 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1731667554.xroagO 00:02:08.232 10:45:54 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:02:08.232 10:45:54 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:02:08.232 10:45:54 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:02:08.232 10:45:54 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:08.232 10:45:54 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:08.232 10:45:54 -- common/autobuild_common.sh@502 -- $ get_config_params 00:02:08.232 10:45:54 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:02:08.232 10:45:54 -- common/autotest_common.sh@10 -- $ set +x 00:02:08.232 10:45:54 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring' 00:02:08.232 10:45:54 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:02:08.232 10:45:54 -- pm/common@17 -- $ local monitor 00:02:08.232 10:45:54 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:08.232 10:45:54 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:08.232 10:45:54 -- pm/common@25 -- $ sleep 1 00:02:08.232 10:45:54 -- pm/common@21 -- $ date +%s 00:02:08.232 10:45:54 -- pm/common@21 -- $ date +%s 00:02:08.232 10:45:54 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1731667554 00:02:08.232 10:45:54 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1731667554 00:02:08.232 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1731667554_collect-cpu-load.pm.log 00:02:08.232 Redirecting to 
/home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1731667554_collect-vmstat.pm.log 00:02:09.170 10:45:55 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:02:09.170 10:45:55 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:09.170 10:45:55 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:09.170 10:45:55 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:09.170 10:45:55 -- spdk/autobuild.sh@16 -- $ date -u 00:02:09.170 Fri Nov 15 10:45:55 AM UTC 2024 00:02:09.170 10:45:55 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:09.170 v25.01-pre-191-gf1a181ac3 00:02:09.170 10:45:55 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:09.170 10:45:55 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:09.170 10:45:55 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:09.170 10:45:55 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:09.170 10:45:55 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:09.170 10:45:55 -- common/autotest_common.sh@10 -- $ set +x 00:02:09.170 ************************************ 00:02:09.170 START TEST ubsan 00:02:09.170 ************************************ 00:02:09.170 using ubsan 00:02:09.170 10:45:55 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:02:09.170 00:02:09.170 real 0m0.000s 00:02:09.170 user 0m0.000s 00:02:09.170 sys 0m0.000s 00:02:09.170 10:45:55 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:09.170 10:45:55 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:09.170 ************************************ 00:02:09.170 END TEST ubsan 00:02:09.170 ************************************ 00:02:09.170 10:45:56 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:09.170 10:45:56 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:09.171 10:45:56 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:09.171 10:45:56 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:09.171 10:45:56 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:09.171 10:45:56 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:09.171 10:45:56 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:09.171 10:45:56 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:09.171 10:45:56 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared 00:02:09.430 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:09.430 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:09.689 Using 'verbs' RDMA provider 00:02:25.535 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:37.806 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:37.806 Creating mk/config.mk...done. 00:02:37.806 Creating mk/cc.flags.mk...done. 00:02:37.806 Type 'make' to build. 
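The configure step above sets up a debug, werror build with UBSan, io_uring and shared libraries enabled. To reproduce it outside the CI harness on a local checkout (paths as used in this log), roughly:

    # Sketch: the SPDK configuration and build this log performs, run by hand.
    cd /home/vagrant/spdk_repo/spdk
    ./configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd \
        --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
        --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared
    make -j"$(nproc)"    # the CI run below uses -j10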
00:02:37.806 10:46:23 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:02:37.806 10:46:23 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:37.806 10:46:23 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:37.806 10:46:23 -- common/autotest_common.sh@10 -- $ set +x 00:02:37.806 ************************************ 00:02:37.806 START TEST make 00:02:37.806 ************************************ 00:02:37.806 10:46:23 make -- common/autotest_common.sh@1129 -- $ make -j10 00:02:37.806 make[1]: Nothing to be done for 'all'. 00:02:50.014 The Meson build system 00:02:50.014 Version: 1.5.0 00:02:50.014 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:50.014 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:50.014 Build type: native build 00:02:50.014 Program cat found: YES (/usr/bin/cat) 00:02:50.014 Project name: DPDK 00:02:50.014 Project version: 24.03.0 00:02:50.014 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:50.014 C linker for the host machine: cc ld.bfd 2.40-14 00:02:50.014 Host machine cpu family: x86_64 00:02:50.014 Host machine cpu: x86_64 00:02:50.014 Message: ## Building in Developer Mode ## 00:02:50.014 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:50.014 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:50.014 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:50.014 Program python3 found: YES (/usr/bin/python3) 00:02:50.014 Program cat found: YES (/usr/bin/cat) 00:02:50.014 Compiler for C supports arguments -march=native: YES 00:02:50.014 Checking for size of "void *" : 8 00:02:50.014 Checking for size of "void *" : 8 (cached) 00:02:50.014 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:50.014 Library m found: YES 00:02:50.014 Library numa found: YES 00:02:50.014 Has header "numaif.h" : YES 00:02:50.014 Library fdt found: NO 00:02:50.014 Library execinfo found: NO 00:02:50.014 Has header "execinfo.h" : YES 00:02:50.014 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:50.014 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:50.014 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:50.014 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:50.014 Run-time dependency openssl found: YES 3.1.1 00:02:50.014 Run-time dependency libpcap found: YES 1.10.4 00:02:50.014 Has header "pcap.h" with dependency libpcap: YES 00:02:50.014 Compiler for C supports arguments -Wcast-qual: YES 00:02:50.014 Compiler for C supports arguments -Wdeprecated: YES 00:02:50.014 Compiler for C supports arguments -Wformat: YES 00:02:50.014 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:50.014 Compiler for C supports arguments -Wformat-security: NO 00:02:50.014 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:50.014 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:50.014 Compiler for C supports arguments -Wnested-externs: YES 00:02:50.014 Compiler for C supports arguments -Wold-style-definition: YES 00:02:50.014 Compiler for C supports arguments -Wpointer-arith: YES 00:02:50.014 Compiler for C supports arguments -Wsign-compare: YES 00:02:50.014 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:50.014 Compiler for C supports arguments -Wundef: YES 00:02:50.014 Compiler for C supports arguments -Wwrite-strings: YES 00:02:50.014 Compiler for C supports 
arguments -Wno-address-of-packed-member: YES 00:02:50.014 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:50.014 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:50.014 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:50.014 Program objdump found: YES (/usr/bin/objdump) 00:02:50.014 Compiler for C supports arguments -mavx512f: YES 00:02:50.014 Checking if "AVX512 checking" compiles: YES 00:02:50.014 Fetching value of define "__SSE4_2__" : 1 00:02:50.014 Fetching value of define "__AES__" : 1 00:02:50.014 Fetching value of define "__AVX__" : 1 00:02:50.014 Fetching value of define "__AVX2__" : 1 00:02:50.014 Fetching value of define "__AVX512BW__" : (undefined) 00:02:50.014 Fetching value of define "__AVX512CD__" : (undefined) 00:02:50.014 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:50.014 Fetching value of define "__AVX512F__" : (undefined) 00:02:50.014 Fetching value of define "__AVX512VL__" : (undefined) 00:02:50.014 Fetching value of define "__PCLMUL__" : 1 00:02:50.014 Fetching value of define "__RDRND__" : 1 00:02:50.014 Fetching value of define "__RDSEED__" : 1 00:02:50.014 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:50.014 Fetching value of define "__znver1__" : (undefined) 00:02:50.014 Fetching value of define "__znver2__" : (undefined) 00:02:50.014 Fetching value of define "__znver3__" : (undefined) 00:02:50.014 Fetching value of define "__znver4__" : (undefined) 00:02:50.014 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:50.014 Message: lib/log: Defining dependency "log" 00:02:50.014 Message: lib/kvargs: Defining dependency "kvargs" 00:02:50.014 Message: lib/telemetry: Defining dependency "telemetry" 00:02:50.014 Checking for function "getentropy" : NO 00:02:50.014 Message: lib/eal: Defining dependency "eal" 00:02:50.014 Message: lib/ring: Defining dependency "ring" 00:02:50.014 Message: lib/rcu: Defining dependency "rcu" 00:02:50.014 Message: lib/mempool: Defining dependency "mempool" 00:02:50.014 Message: lib/mbuf: Defining dependency "mbuf" 00:02:50.014 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:50.014 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:50.014 Compiler for C supports arguments -mpclmul: YES 00:02:50.014 Compiler for C supports arguments -maes: YES 00:02:50.014 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:50.014 Compiler for C supports arguments -mavx512bw: YES 00:02:50.014 Compiler for C supports arguments -mavx512dq: YES 00:02:50.014 Compiler for C supports arguments -mavx512vl: YES 00:02:50.014 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:50.014 Compiler for C supports arguments -mavx2: YES 00:02:50.014 Compiler for C supports arguments -mavx: YES 00:02:50.014 Message: lib/net: Defining dependency "net" 00:02:50.014 Message: lib/meter: Defining dependency "meter" 00:02:50.015 Message: lib/ethdev: Defining dependency "ethdev" 00:02:50.015 Message: lib/pci: Defining dependency "pci" 00:02:50.015 Message: lib/cmdline: Defining dependency "cmdline" 00:02:50.015 Message: lib/hash: Defining dependency "hash" 00:02:50.015 Message: lib/timer: Defining dependency "timer" 00:02:50.015 Message: lib/compressdev: Defining dependency "compressdev" 00:02:50.015 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:50.015 Message: lib/dmadev: Defining dependency "dmadev" 00:02:50.015 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:50.015 Message: lib/power: Defining 
dependency "power" 00:02:50.015 Message: lib/reorder: Defining dependency "reorder" 00:02:50.015 Message: lib/security: Defining dependency "security" 00:02:50.015 Has header "linux/userfaultfd.h" : YES 00:02:50.015 Has header "linux/vduse.h" : YES 00:02:50.015 Message: lib/vhost: Defining dependency "vhost" 00:02:50.015 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:50.015 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:50.015 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:50.015 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:50.015 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:50.015 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:50.015 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:50.015 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:50.015 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:50.015 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:50.015 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:50.015 Configuring doxy-api-html.conf using configuration 00:02:50.015 Configuring doxy-api-man.conf using configuration 00:02:50.015 Program mandb found: YES (/usr/bin/mandb) 00:02:50.015 Program sphinx-build found: NO 00:02:50.015 Configuring rte_build_config.h using configuration 00:02:50.015 Message: 00:02:50.015 ================= 00:02:50.015 Applications Enabled 00:02:50.015 ================= 00:02:50.015 00:02:50.015 apps: 00:02:50.015 00:02:50.015 00:02:50.015 Message: 00:02:50.015 ================= 00:02:50.015 Libraries Enabled 00:02:50.015 ================= 00:02:50.015 00:02:50.015 libs: 00:02:50.015 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:50.015 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:50.015 cryptodev, dmadev, power, reorder, security, vhost, 00:02:50.015 00:02:50.015 Message: 00:02:50.015 =============== 00:02:50.015 Drivers Enabled 00:02:50.015 =============== 00:02:50.015 00:02:50.015 common: 00:02:50.015 00:02:50.015 bus: 00:02:50.015 pci, vdev, 00:02:50.015 mempool: 00:02:50.015 ring, 00:02:50.015 dma: 00:02:50.015 00:02:50.015 net: 00:02:50.015 00:02:50.015 crypto: 00:02:50.015 00:02:50.015 compress: 00:02:50.015 00:02:50.015 vdpa: 00:02:50.015 00:02:50.015 00:02:50.015 Message: 00:02:50.015 ================= 00:02:50.015 Content Skipped 00:02:50.015 ================= 00:02:50.015 00:02:50.015 apps: 00:02:50.015 dumpcap: explicitly disabled via build config 00:02:50.015 graph: explicitly disabled via build config 00:02:50.015 pdump: explicitly disabled via build config 00:02:50.015 proc-info: explicitly disabled via build config 00:02:50.015 test-acl: explicitly disabled via build config 00:02:50.015 test-bbdev: explicitly disabled via build config 00:02:50.015 test-cmdline: explicitly disabled via build config 00:02:50.015 test-compress-perf: explicitly disabled via build config 00:02:50.015 test-crypto-perf: explicitly disabled via build config 00:02:50.015 test-dma-perf: explicitly disabled via build config 00:02:50.015 test-eventdev: explicitly disabled via build config 00:02:50.015 test-fib: explicitly disabled via build config 00:02:50.015 test-flow-perf: explicitly disabled via build config 00:02:50.015 test-gpudev: explicitly disabled via build config 00:02:50.015 test-mldev: explicitly disabled via build config 00:02:50.015 test-pipeline: 
explicitly disabled via build config 00:02:50.015 test-pmd: explicitly disabled via build config 00:02:50.015 test-regex: explicitly disabled via build config 00:02:50.015 test-sad: explicitly disabled via build config 00:02:50.015 test-security-perf: explicitly disabled via build config 00:02:50.015 00:02:50.015 libs: 00:02:50.015 argparse: explicitly disabled via build config 00:02:50.015 metrics: explicitly disabled via build config 00:02:50.015 acl: explicitly disabled via build config 00:02:50.015 bbdev: explicitly disabled via build config 00:02:50.015 bitratestats: explicitly disabled via build config 00:02:50.015 bpf: explicitly disabled via build config 00:02:50.015 cfgfile: explicitly disabled via build config 00:02:50.015 distributor: explicitly disabled via build config 00:02:50.015 efd: explicitly disabled via build config 00:02:50.015 eventdev: explicitly disabled via build config 00:02:50.015 dispatcher: explicitly disabled via build config 00:02:50.015 gpudev: explicitly disabled via build config 00:02:50.015 gro: explicitly disabled via build config 00:02:50.015 gso: explicitly disabled via build config 00:02:50.015 ip_frag: explicitly disabled via build config 00:02:50.015 jobstats: explicitly disabled via build config 00:02:50.015 latencystats: explicitly disabled via build config 00:02:50.015 lpm: explicitly disabled via build config 00:02:50.015 member: explicitly disabled via build config 00:02:50.015 pcapng: explicitly disabled via build config 00:02:50.015 rawdev: explicitly disabled via build config 00:02:50.015 regexdev: explicitly disabled via build config 00:02:50.015 mldev: explicitly disabled via build config 00:02:50.015 rib: explicitly disabled via build config 00:02:50.015 sched: explicitly disabled via build config 00:02:50.015 stack: explicitly disabled via build config 00:02:50.015 ipsec: explicitly disabled via build config 00:02:50.015 pdcp: explicitly disabled via build config 00:02:50.015 fib: explicitly disabled via build config 00:02:50.015 port: explicitly disabled via build config 00:02:50.015 pdump: explicitly disabled via build config 00:02:50.015 table: explicitly disabled via build config 00:02:50.015 pipeline: explicitly disabled via build config 00:02:50.015 graph: explicitly disabled via build config 00:02:50.015 node: explicitly disabled via build config 00:02:50.015 00:02:50.015 drivers: 00:02:50.015 common/cpt: not in enabled drivers build config 00:02:50.015 common/dpaax: not in enabled drivers build config 00:02:50.015 common/iavf: not in enabled drivers build config 00:02:50.015 common/idpf: not in enabled drivers build config 00:02:50.015 common/ionic: not in enabled drivers build config 00:02:50.015 common/mvep: not in enabled drivers build config 00:02:50.015 common/octeontx: not in enabled drivers build config 00:02:50.015 bus/auxiliary: not in enabled drivers build config 00:02:50.015 bus/cdx: not in enabled drivers build config 00:02:50.015 bus/dpaa: not in enabled drivers build config 00:02:50.015 bus/fslmc: not in enabled drivers build config 00:02:50.015 bus/ifpga: not in enabled drivers build config 00:02:50.015 bus/platform: not in enabled drivers build config 00:02:50.015 bus/uacce: not in enabled drivers build config 00:02:50.015 bus/vmbus: not in enabled drivers build config 00:02:50.015 common/cnxk: not in enabled drivers build config 00:02:50.015 common/mlx5: not in enabled drivers build config 00:02:50.015 common/nfp: not in enabled drivers build config 00:02:50.015 common/nitrox: not in enabled drivers build config 
00:02:50.015 common/qat: not in enabled drivers build config 00:02:50.015 common/sfc_efx: not in enabled drivers build config 00:02:50.015 mempool/bucket: not in enabled drivers build config 00:02:50.015 mempool/cnxk: not in enabled drivers build config 00:02:50.015 mempool/dpaa: not in enabled drivers build config 00:02:50.015 mempool/dpaa2: not in enabled drivers build config 00:02:50.015 mempool/octeontx: not in enabled drivers build config 00:02:50.015 mempool/stack: not in enabled drivers build config 00:02:50.015 dma/cnxk: not in enabled drivers build config 00:02:50.015 dma/dpaa: not in enabled drivers build config 00:02:50.015 dma/dpaa2: not in enabled drivers build config 00:02:50.015 dma/hisilicon: not in enabled drivers build config 00:02:50.015 dma/idxd: not in enabled drivers build config 00:02:50.015 dma/ioat: not in enabled drivers build config 00:02:50.015 dma/skeleton: not in enabled drivers build config 00:02:50.015 net/af_packet: not in enabled drivers build config 00:02:50.015 net/af_xdp: not in enabled drivers build config 00:02:50.015 net/ark: not in enabled drivers build config 00:02:50.015 net/atlantic: not in enabled drivers build config 00:02:50.015 net/avp: not in enabled drivers build config 00:02:50.015 net/axgbe: not in enabled drivers build config 00:02:50.015 net/bnx2x: not in enabled drivers build config 00:02:50.015 net/bnxt: not in enabled drivers build config 00:02:50.015 net/bonding: not in enabled drivers build config 00:02:50.015 net/cnxk: not in enabled drivers build config 00:02:50.015 net/cpfl: not in enabled drivers build config 00:02:50.015 net/cxgbe: not in enabled drivers build config 00:02:50.015 net/dpaa: not in enabled drivers build config 00:02:50.015 net/dpaa2: not in enabled drivers build config 00:02:50.015 net/e1000: not in enabled drivers build config 00:02:50.015 net/ena: not in enabled drivers build config 00:02:50.015 net/enetc: not in enabled drivers build config 00:02:50.015 net/enetfec: not in enabled drivers build config 00:02:50.015 net/enic: not in enabled drivers build config 00:02:50.015 net/failsafe: not in enabled drivers build config 00:02:50.015 net/fm10k: not in enabled drivers build config 00:02:50.015 net/gve: not in enabled drivers build config 00:02:50.015 net/hinic: not in enabled drivers build config 00:02:50.015 net/hns3: not in enabled drivers build config 00:02:50.015 net/i40e: not in enabled drivers build config 00:02:50.015 net/iavf: not in enabled drivers build config 00:02:50.015 net/ice: not in enabled drivers build config 00:02:50.015 net/idpf: not in enabled drivers build config 00:02:50.015 net/igc: not in enabled drivers build config 00:02:50.015 net/ionic: not in enabled drivers build config 00:02:50.015 net/ipn3ke: not in enabled drivers build config 00:02:50.016 net/ixgbe: not in enabled drivers build config 00:02:50.016 net/mana: not in enabled drivers build config 00:02:50.016 net/memif: not in enabled drivers build config 00:02:50.016 net/mlx4: not in enabled drivers build config 00:02:50.016 net/mlx5: not in enabled drivers build config 00:02:50.016 net/mvneta: not in enabled drivers build config 00:02:50.016 net/mvpp2: not in enabled drivers build config 00:02:50.016 net/netvsc: not in enabled drivers build config 00:02:50.016 net/nfb: not in enabled drivers build config 00:02:50.016 net/nfp: not in enabled drivers build config 00:02:50.016 net/ngbe: not in enabled drivers build config 00:02:50.016 net/null: not in enabled drivers build config 00:02:50.016 net/octeontx: not in enabled drivers 
build config 00:02:50.016 net/octeon_ep: not in enabled drivers build config 00:02:50.016 net/pcap: not in enabled drivers build config 00:02:50.016 net/pfe: not in enabled drivers build config 00:02:50.016 net/qede: not in enabled drivers build config 00:02:50.016 net/ring: not in enabled drivers build config 00:02:50.016 net/sfc: not in enabled drivers build config 00:02:50.016 net/softnic: not in enabled drivers build config 00:02:50.016 net/tap: not in enabled drivers build config 00:02:50.016 net/thunderx: not in enabled drivers build config 00:02:50.016 net/txgbe: not in enabled drivers build config 00:02:50.016 net/vdev_netvsc: not in enabled drivers build config 00:02:50.016 net/vhost: not in enabled drivers build config 00:02:50.016 net/virtio: not in enabled drivers build config 00:02:50.016 net/vmxnet3: not in enabled drivers build config 00:02:50.016 raw/*: missing internal dependency, "rawdev" 00:02:50.016 crypto/armv8: not in enabled drivers build config 00:02:50.016 crypto/bcmfs: not in enabled drivers build config 00:02:50.016 crypto/caam_jr: not in enabled drivers build config 00:02:50.016 crypto/ccp: not in enabled drivers build config 00:02:50.016 crypto/cnxk: not in enabled drivers build config 00:02:50.016 crypto/dpaa_sec: not in enabled drivers build config 00:02:50.016 crypto/dpaa2_sec: not in enabled drivers build config 00:02:50.016 crypto/ipsec_mb: not in enabled drivers build config 00:02:50.016 crypto/mlx5: not in enabled drivers build config 00:02:50.016 crypto/mvsam: not in enabled drivers build config 00:02:50.016 crypto/nitrox: not in enabled drivers build config 00:02:50.016 crypto/null: not in enabled drivers build config 00:02:50.016 crypto/octeontx: not in enabled drivers build config 00:02:50.016 crypto/openssl: not in enabled drivers build config 00:02:50.016 crypto/scheduler: not in enabled drivers build config 00:02:50.016 crypto/uadk: not in enabled drivers build config 00:02:50.016 crypto/virtio: not in enabled drivers build config 00:02:50.016 compress/isal: not in enabled drivers build config 00:02:50.016 compress/mlx5: not in enabled drivers build config 00:02:50.016 compress/nitrox: not in enabled drivers build config 00:02:50.016 compress/octeontx: not in enabled drivers build config 00:02:50.016 compress/zlib: not in enabled drivers build config 00:02:50.016 regex/*: missing internal dependency, "regexdev" 00:02:50.016 ml/*: missing internal dependency, "mldev" 00:02:50.016 vdpa/ifc: not in enabled drivers build config 00:02:50.016 vdpa/mlx5: not in enabled drivers build config 00:02:50.016 vdpa/nfp: not in enabled drivers build config 00:02:50.016 vdpa/sfc: not in enabled drivers build config 00:02:50.016 event/*: missing internal dependency, "eventdev" 00:02:50.016 baseband/*: missing internal dependency, "bbdev" 00:02:50.016 gpu/*: missing internal dependency, "gpudev" 00:02:50.016 00:02:50.016 00:02:50.016 Build targets in project: 85 00:02:50.016 00:02:50.016 DPDK 24.03.0 00:02:50.016 00:02:50.016 User defined options 00:02:50.016 buildtype : debug 00:02:50.016 default_library : shared 00:02:50.016 libdir : lib 00:02:50.016 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:50.016 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:50.016 c_link_args : 00:02:50.016 cpu_instruction_set: native 00:02:50.016 disable_apps : 
dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:50.016 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:50.016 enable_docs : false 00:02:50.016 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:50.016 enable_kmods : false 00:02:50.016 max_lcores : 128 00:02:50.016 tests : false 00:02:50.016 00:02:50.016 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:50.016 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:50.016 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:50.016 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:50.016 [3/268] Linking static target lib/librte_kvargs.a 00:02:50.016 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:50.016 [5/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:50.016 [6/268] Linking static target lib/librte_log.a 00:02:50.016 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.016 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:50.016 [9/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:50.016 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:50.016 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:50.016 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:50.275 [13/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:50.275 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:50.275 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:50.275 [16/268] Linking static target lib/librte_telemetry.a 00:02:50.275 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:50.533 [18/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.533 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:50.533 [20/268] Linking target lib/librte_log.so.24.1 00:02:50.791 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:50.791 [22/268] Linking target lib/librte_kvargs.so.24.1 00:02:50.791 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:51.049 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:51.049 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:51.049 [26/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:51.049 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:51.049 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:51.049 [29/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.049 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:51.307 [31/268] 
Linking target lib/librte_telemetry.so.24.1 00:02:51.307 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:51.307 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:51.307 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:51.307 [35/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:51.566 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:51.566 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:51.825 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:51.825 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:51.825 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:51.825 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:52.084 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:52.084 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:52.084 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:52.084 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:52.342 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:52.342 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:52.342 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:52.342 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:52.601 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:52.601 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:52.859 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:52.859 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:52.859 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:52.859 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:52.859 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:53.118 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:53.118 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:53.377 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:53.377 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:53.377 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:53.636 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:53.636 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:53.636 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:53.894 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:53.894 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:53.894 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:53.894 [68/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:54.153 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:54.414 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 
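(Editorial aside, not part of the captured build output.) The "User defined options" summary earlier in this log -- buildtype debug, default_library shared, the long disable_apps/disable_libs lists, and enable_drivers limited to bus, bus/pci, bus/vdev and mempool/ring -- is the kind of result a DPDK meson setup along the following lines would produce. This is a hedged, illustrative sketch only: the option names and values are copied from the summary above, but the command itself is generated by SPDK's dpdkbuild scripts and does not appear verbatim in this log.

# Illustrative sketch of a meson configuration matching the summary above.
# Run from the DPDK source directory; builddir, prefix and option values are
# taken from the log, but the invocation itself is an assumption.
meson setup /home/vagrant/spdk_repo/spdk/dpdk/build-tmp \
    --prefix=/home/vagrant/spdk_repo/spdk/dpdk/build \
    --libdir=lib \
    --buildtype=debug \
    --default-library=shared \
    -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
    -Dcpu_instruction_set=native \
    -Dmax_lcores=128 \
    -Denable_docs=false \
    -Denable_kmods=false \
    -Dtests=false \
    -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
    -Ddisable_apps=dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test \
    -Ddisable_libs=acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table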
00:02:54.414 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:54.414 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:54.414 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:54.672 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:54.672 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:54.672 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:54.672 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:54.672 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:54.672 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:54.931 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:55.189 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:55.189 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:55.447 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:55.447 [84/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:55.447 [85/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:55.447 [86/268] Linking static target lib/librte_ring.a 00:02:55.447 [87/268] Linking static target lib/librte_eal.a 00:02:55.447 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:55.447 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:55.447 [90/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:55.705 [91/268] Linking static target lib/librte_rcu.a 00:02:55.705 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:55.705 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:55.705 [94/268] Linking static target lib/librte_mempool.a 00:02:55.964 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:55.964 [96/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.964 [97/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:55.964 [98/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:55.964 [99/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.224 [100/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:56.224 [101/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:56.482 [102/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:56.482 [103/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:56.482 [104/268] Linking static target lib/librte_mbuf.a 00:02:56.482 [105/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:56.482 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:56.741 [107/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:56.742 [108/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:56.742 [109/268] Linking static target lib/librte_net.a 00:02:57.000 [110/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.000 [111/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.000 [112/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 
00:02:57.000 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:57.000 [114/268] Linking static target lib/librte_meter.a 00:02:57.306 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:57.306 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:57.306 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:57.564 [118/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.564 [119/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.823 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:57.823 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:58.081 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:58.081 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:58.340 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:58.340 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:58.600 [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:58.600 [127/268] Linking static target lib/librte_pci.a 00:02:58.600 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:58.600 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:58.600 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:58.600 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:58.600 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:58.600 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:58.600 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:58.600 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:58.859 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:58.859 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:58.859 [138/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.859 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:58.859 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:58.859 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:58.859 [142/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:58.859 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:58.859 [144/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:58.859 [145/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:59.118 [146/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:59.118 [147/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:59.118 [148/268] Linking static target lib/librte_ethdev.a 00:02:59.377 [149/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:59.377 [150/268] Linking static target lib/librte_cmdline.a 00:02:59.377 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:59.636 [152/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 
00:02:59.636 [153/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:59.636 [154/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:59.894 [155/268] Linking static target lib/librte_timer.a 00:02:59.894 [156/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:59.894 [157/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:59.894 [158/268] Linking static target lib/librte_hash.a 00:02:59.894 [159/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:00.152 [160/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:00.152 [161/268] Linking static target lib/librte_compressdev.a 00:03:00.411 [162/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:00.411 [163/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:00.411 [164/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:00.411 [165/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.670 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:00.930 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:00.930 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:00.930 [169/268] Linking static target lib/librte_dmadev.a 00:03:00.930 [170/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:00.930 [171/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:01.192 [172/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:01.192 [173/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:01.192 [174/268] Linking static target lib/librte_cryptodev.a 00:03:01.192 [175/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.192 [176/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.192 [177/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.451 [178/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:01.721 [179/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:01.721 [180/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:01.721 [181/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:01.721 [182/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:01.721 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:01.721 [184/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.980 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:01.980 [186/268] Linking static target lib/librte_power.a 00:03:02.239 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:02.239 [188/268] Linking static target lib/librte_reorder.a 00:03:02.499 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:02.499 [190/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:02.499 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:02.499 [192/268] Linking static target lib/librte_security.a 00:03:02.499 [193/268] 
Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:02.758 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.758 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:03.326 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.326 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:03.326 [198/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.326 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:03.326 [200/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:03.586 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:03.844 [202/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.103 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:04.103 [204/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:04.103 [205/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:04.103 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:04.103 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:04.103 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:04.361 [209/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:04.361 [210/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:04.361 [211/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:04.361 [212/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:04.620 [213/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:04.620 [214/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:04.620 [215/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:04.620 [216/268] Linking static target drivers/librte_bus_pci.a 00:03:04.620 [217/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:04.620 [218/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:04.620 [219/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:04.620 [220/268] Linking static target drivers/librte_bus_vdev.a 00:03:04.620 [221/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:04.620 [222/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:04.879 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:04.879 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:04.879 [225/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:04.879 [226/268] Linking static target drivers/librte_mempool_ring.a 00:03:04.879 [227/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.138 [228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.706 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:05.706 [230/268] 
Linking static target lib/librte_vhost.a 00:03:06.644 [231/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.644 [232/268] Linking target lib/librte_eal.so.24.1 00:03:06.903 [233/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:06.903 [234/268] Linking target lib/librte_ring.so.24.1 00:03:06.903 [235/268] Linking target lib/librte_timer.so.24.1 00:03:06.903 [236/268] Linking target lib/librte_meter.so.24.1 00:03:06.903 [237/268] Linking target lib/librte_pci.so.24.1 00:03:06.903 [238/268] Linking target drivers/librte_bus_vdev.so.24.1 00:03:06.903 [239/268] Linking target lib/librte_dmadev.so.24.1 00:03:06.903 [240/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:07.161 [241/268] Linking target lib/librte_mempool.so.24.1 00:03:07.161 [242/268] Linking target lib/librte_rcu.so.24.1 00:03:07.161 [243/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:07.161 [244/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:07.161 [245/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:07.161 [246/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:07.161 [247/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.161 [248/268] Linking target drivers/librte_bus_pci.so.24.1 00:03:07.161 [249/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:07.161 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:07.161 [251/268] Linking target lib/librte_mbuf.so.24.1 00:03:07.161 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:03:07.161 [253/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.420 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:07.420 [255/268] Linking target lib/librte_net.so.24.1 00:03:07.420 [256/268] Linking target lib/librte_reorder.so.24.1 00:03:07.420 [257/268] Linking target lib/librte_compressdev.so.24.1 00:03:07.420 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:03:07.679 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:07.679 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:07.679 [261/268] Linking target lib/librte_hash.so.24.1 00:03:07.679 [262/268] Linking target lib/librte_cmdline.so.24.1 00:03:07.679 [263/268] Linking target lib/librte_security.so.24.1 00:03:07.679 [264/268] Linking target lib/librte_ethdev.so.24.1 00:03:07.679 [265/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:07.679 [266/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:07.938 [267/268] Linking target lib/librte_power.so.24.1 00:03:07.938 [268/268] Linking target lib/librte_vhost.so.24.1 00:03:07.938 INFO: autodetecting backend as ninja 00:03:07.938 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:34.509 CC lib/log/log.o 00:03:34.509 CC lib/log/log_flags.o 00:03:34.509 CC lib/log/log_deprecated.o 00:03:34.509 CC lib/ut_mock/mock.o 00:03:34.509 CC lib/ut/ut.o 00:03:34.509 LIB libspdk_ut.a 00:03:34.509 LIB libspdk_ut_mock.a 00:03:34.509 LIB 
libspdk_log.a 00:03:34.509 SO libspdk_ut.so.2.0 00:03:34.509 SO libspdk_ut_mock.so.6.0 00:03:34.509 SO libspdk_log.so.7.1 00:03:34.509 SYMLINK libspdk_ut.so 00:03:34.509 SYMLINK libspdk_ut_mock.so 00:03:34.509 SYMLINK libspdk_log.so 00:03:34.509 CC lib/dma/dma.o 00:03:34.509 CC lib/ioat/ioat.o 00:03:34.509 CC lib/util/base64.o 00:03:34.509 CC lib/util/bit_array.o 00:03:34.509 CC lib/util/cpuset.o 00:03:34.509 CC lib/util/crc16.o 00:03:34.509 CC lib/util/crc32.o 00:03:34.509 CC lib/util/crc32c.o 00:03:34.509 CXX lib/trace_parser/trace.o 00:03:34.509 CC lib/vfio_user/host/vfio_user_pci.o 00:03:34.509 CC lib/vfio_user/host/vfio_user.o 00:03:34.509 CC lib/util/crc32_ieee.o 00:03:34.509 CC lib/util/crc64.o 00:03:34.509 CC lib/util/dif.o 00:03:34.509 CC lib/util/fd.o 00:03:34.509 LIB libspdk_dma.a 00:03:34.509 CC lib/util/fd_group.o 00:03:34.509 SO libspdk_dma.so.5.0 00:03:34.509 LIB libspdk_ioat.a 00:03:34.509 SYMLINK libspdk_dma.so 00:03:34.509 CC lib/util/file.o 00:03:34.509 CC lib/util/hexlify.o 00:03:34.509 CC lib/util/iov.o 00:03:34.509 SO libspdk_ioat.so.7.0 00:03:34.509 CC lib/util/math.o 00:03:34.509 SYMLINK libspdk_ioat.so 00:03:34.509 CC lib/util/net.o 00:03:34.509 CC lib/util/pipe.o 00:03:34.509 LIB libspdk_vfio_user.a 00:03:34.509 SO libspdk_vfio_user.so.5.0 00:03:34.509 CC lib/util/strerror_tls.o 00:03:34.509 CC lib/util/string.o 00:03:34.509 SYMLINK libspdk_vfio_user.so 00:03:34.509 CC lib/util/uuid.o 00:03:34.509 CC lib/util/xor.o 00:03:34.509 CC lib/util/zipf.o 00:03:34.509 CC lib/util/md5.o 00:03:34.509 LIB libspdk_util.a 00:03:34.509 SO libspdk_util.so.10.1 00:03:34.509 LIB libspdk_trace_parser.a 00:03:34.509 SO libspdk_trace_parser.so.6.0 00:03:34.509 SYMLINK libspdk_util.so 00:03:34.509 SYMLINK libspdk_trace_parser.so 00:03:34.509 CC lib/rdma_utils/rdma_utils.o 00:03:34.509 CC lib/json/json_parse.o 00:03:34.509 CC lib/json/json_util.o 00:03:34.509 CC lib/vmd/vmd.o 00:03:34.509 CC lib/vmd/led.o 00:03:34.509 CC lib/env_dpdk/env.o 00:03:34.509 CC lib/env_dpdk/memory.o 00:03:34.509 CC lib/json/json_write.o 00:03:34.509 CC lib/conf/conf.o 00:03:34.509 CC lib/idxd/idxd.o 00:03:34.509 CC lib/idxd/idxd_user.o 00:03:34.509 CC lib/env_dpdk/pci.o 00:03:34.509 LIB libspdk_conf.a 00:03:34.509 CC lib/env_dpdk/init.o 00:03:34.509 SO libspdk_conf.so.6.0 00:03:34.509 LIB libspdk_rdma_utils.a 00:03:34.509 LIB libspdk_json.a 00:03:34.509 SO libspdk_rdma_utils.so.1.0 00:03:34.509 SO libspdk_json.so.6.0 00:03:34.509 SYMLINK libspdk_conf.so 00:03:34.509 CC lib/env_dpdk/threads.o 00:03:34.509 SYMLINK libspdk_rdma_utils.so 00:03:34.509 CC lib/env_dpdk/pci_ioat.o 00:03:34.509 CC lib/env_dpdk/pci_virtio.o 00:03:34.509 SYMLINK libspdk_json.so 00:03:34.509 CC lib/env_dpdk/pci_vmd.o 00:03:34.509 CC lib/env_dpdk/pci_idxd.o 00:03:34.509 CC lib/idxd/idxd_kernel.o 00:03:34.509 CC lib/env_dpdk/pci_event.o 00:03:34.509 CC lib/env_dpdk/sigbus_handler.o 00:03:34.509 CC lib/env_dpdk/pci_dpdk.o 00:03:34.509 LIB libspdk_vmd.a 00:03:34.509 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:34.509 SO libspdk_vmd.so.6.0 00:03:34.509 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:34.509 SYMLINK libspdk_vmd.so 00:03:34.509 LIB libspdk_idxd.a 00:03:34.509 SO libspdk_idxd.so.12.1 00:03:34.509 SYMLINK libspdk_idxd.so 00:03:34.509 CC lib/rdma_provider/common.o 00:03:34.509 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:34.509 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:34.509 CC lib/jsonrpc/jsonrpc_client.o 00:03:34.509 CC lib/jsonrpc/jsonrpc_server.o 00:03:34.509 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:34.509 LIB libspdk_rdma_provider.a 
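(Editorial aside, not part of the captured build output.) The repeating CC / LIB / SO / SYMLINK lines in this part of the log are SPDK's quiet make output: each component's objects are compiled, archived into a static library, linked into a versioned shared object, and exposed through an unversioned symlink. Below is a rough sketch of what the libspdk_log lines above correspond to; object names and the .so version come from the log, but the exact commands and flags are assumptions -- the real rules live in SPDK's mk/ makefiles.

# Rough, assumed equivalents of the "LIB libspdk_log.a", "SO libspdk_log.so.7.1"
# and "SYMLINK libspdk_log.so" steps shown above (object paths simplified).
ar rcs build/lib/libspdk_log.a log.o log_flags.o log_deprecated.o
cc -shared -fPIC -Wl,-soname,libspdk_log.so.7.1 \
    -o build/lib/libspdk_log.so.7.1 log.o log_flags.o log_deprecated.o
ln -sf libspdk_log.so.7.1 build/lib/libspdk_log.so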
00:03:34.509 SO libspdk_rdma_provider.so.7.0 00:03:34.509 LIB libspdk_jsonrpc.a 00:03:34.509 SYMLINK libspdk_rdma_provider.so 00:03:34.509 SO libspdk_jsonrpc.so.6.0 00:03:34.509 SYMLINK libspdk_jsonrpc.so 00:03:34.509 LIB libspdk_env_dpdk.a 00:03:34.509 SO libspdk_env_dpdk.so.15.1 00:03:34.509 CC lib/rpc/rpc.o 00:03:34.509 SYMLINK libspdk_env_dpdk.so 00:03:34.509 LIB libspdk_rpc.a 00:03:34.767 SO libspdk_rpc.so.6.0 00:03:34.767 SYMLINK libspdk_rpc.so 00:03:35.026 CC lib/trace/trace.o 00:03:35.026 CC lib/trace/trace_flags.o 00:03:35.026 CC lib/trace/trace_rpc.o 00:03:35.026 CC lib/keyring/keyring.o 00:03:35.026 CC lib/keyring/keyring_rpc.o 00:03:35.026 CC lib/notify/notify.o 00:03:35.026 CC lib/notify/notify_rpc.o 00:03:35.284 LIB libspdk_notify.a 00:03:35.284 LIB libspdk_keyring.a 00:03:35.284 SO libspdk_notify.so.6.0 00:03:35.284 SO libspdk_keyring.so.2.0 00:03:35.284 LIB libspdk_trace.a 00:03:35.284 SYMLINK libspdk_notify.so 00:03:35.284 SO libspdk_trace.so.11.0 00:03:35.284 SYMLINK libspdk_keyring.so 00:03:35.284 SYMLINK libspdk_trace.so 00:03:35.542 CC lib/thread/thread.o 00:03:35.542 CC lib/thread/iobuf.o 00:03:35.542 CC lib/sock/sock.o 00:03:35.542 CC lib/sock/sock_rpc.o 00:03:36.110 LIB libspdk_sock.a 00:03:36.110 SO libspdk_sock.so.10.0 00:03:36.110 SYMLINK libspdk_sock.so 00:03:36.677 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:36.677 CC lib/nvme/nvme_ctrlr.o 00:03:36.677 CC lib/nvme/nvme_fabric.o 00:03:36.677 CC lib/nvme/nvme_ns.o 00:03:36.677 CC lib/nvme/nvme_ns_cmd.o 00:03:36.677 CC lib/nvme/nvme_pcie.o 00:03:36.677 CC lib/nvme/nvme_pcie_common.o 00:03:36.677 CC lib/nvme/nvme_qpair.o 00:03:36.677 CC lib/nvme/nvme.o 00:03:37.244 LIB libspdk_thread.a 00:03:37.244 SO libspdk_thread.so.11.0 00:03:37.244 SYMLINK libspdk_thread.so 00:03:37.244 CC lib/nvme/nvme_quirks.o 00:03:37.244 CC lib/nvme/nvme_transport.o 00:03:37.503 CC lib/nvme/nvme_discovery.o 00:03:37.503 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:37.503 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:37.503 CC lib/nvme/nvme_tcp.o 00:03:37.503 CC lib/nvme/nvme_opal.o 00:03:37.760 CC lib/nvme/nvme_io_msg.o 00:03:37.760 CC lib/nvme/nvme_poll_group.o 00:03:38.019 CC lib/accel/accel.o 00:03:38.019 CC lib/nvme/nvme_zns.o 00:03:38.277 CC lib/accel/accel_rpc.o 00:03:38.277 CC lib/blob/blobstore.o 00:03:38.277 CC lib/init/json_config.o 00:03:38.277 CC lib/virtio/virtio.o 00:03:38.535 CC lib/nvme/nvme_stubs.o 00:03:38.535 CC lib/nvme/nvme_auth.o 00:03:38.535 CC lib/accel/accel_sw.o 00:03:38.535 CC lib/init/subsystem.o 00:03:38.793 CC lib/virtio/virtio_vhost_user.o 00:03:38.793 CC lib/init/subsystem_rpc.o 00:03:38.793 CC lib/blob/request.o 00:03:38.793 CC lib/nvme/nvme_cuse.o 00:03:38.793 CC lib/init/rpc.o 00:03:38.793 CC lib/nvme/nvme_rdma.o 00:03:39.052 CC lib/virtio/virtio_vfio_user.o 00:03:39.052 LIB libspdk_init.a 00:03:39.052 CC lib/blob/zeroes.o 00:03:39.052 SO libspdk_init.so.6.0 00:03:39.052 CC lib/blob/blob_bs_dev.o 00:03:39.052 SYMLINK libspdk_init.so 00:03:39.052 CC lib/virtio/virtio_pci.o 00:03:39.310 LIB libspdk_accel.a 00:03:39.310 CC lib/fsdev/fsdev.o 00:03:39.310 CC lib/fsdev/fsdev_io.o 00:03:39.310 SO libspdk_accel.so.16.0 00:03:39.310 CC lib/fsdev/fsdev_rpc.o 00:03:39.310 SYMLINK libspdk_accel.so 00:03:39.568 CC lib/event/app.o 00:03:39.568 CC lib/event/log_rpc.o 00:03:39.568 LIB libspdk_virtio.a 00:03:39.568 CC lib/event/reactor.o 00:03:39.568 CC lib/bdev/bdev.o 00:03:39.568 SO libspdk_virtio.so.7.0 00:03:39.568 CC lib/bdev/bdev_rpc.o 00:03:39.568 SYMLINK libspdk_virtio.so 00:03:39.568 CC lib/event/app_rpc.o 00:03:39.568 CC 
lib/event/scheduler_static.o 00:03:39.826 CC lib/bdev/bdev_zone.o 00:03:39.826 CC lib/bdev/part.o 00:03:39.826 LIB libspdk_fsdev.a 00:03:39.826 SO libspdk_fsdev.so.2.0 00:03:39.826 CC lib/bdev/scsi_nvme.o 00:03:39.826 LIB libspdk_event.a 00:03:40.085 SYMLINK libspdk_fsdev.so 00:03:40.085 SO libspdk_event.so.14.0 00:03:40.085 SYMLINK libspdk_event.so 00:03:40.085 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:40.343 LIB libspdk_nvme.a 00:03:40.343 SO libspdk_nvme.so.15.0 00:03:40.602 SYMLINK libspdk_nvme.so 00:03:40.861 LIB libspdk_fuse_dispatcher.a 00:03:40.861 SO libspdk_fuse_dispatcher.so.1.0 00:03:40.861 SYMLINK libspdk_fuse_dispatcher.so 00:03:41.429 LIB libspdk_blob.a 00:03:41.688 SO libspdk_blob.so.11.0 00:03:41.688 SYMLINK libspdk_blob.so 00:03:41.947 CC lib/blobfs/tree.o 00:03:41.947 CC lib/blobfs/blobfs.o 00:03:41.947 CC lib/lvol/lvol.o 00:03:42.515 LIB libspdk_bdev.a 00:03:42.515 SO libspdk_bdev.so.17.0 00:03:42.773 SYMLINK libspdk_bdev.so 00:03:43.032 LIB libspdk_blobfs.a 00:03:43.032 CC lib/scsi/dev.o 00:03:43.032 CC lib/scsi/lun.o 00:03:43.032 CC lib/scsi/port.o 00:03:43.032 CC lib/nbd/nbd_rpc.o 00:03:43.032 CC lib/nbd/nbd.o 00:03:43.032 CC lib/nvmf/ctrlr.o 00:03:43.032 CC lib/ublk/ublk.o 00:03:43.032 CC lib/ftl/ftl_core.o 00:03:43.032 SO libspdk_blobfs.so.10.0 00:03:43.032 LIB libspdk_lvol.a 00:03:43.032 SO libspdk_lvol.so.10.0 00:03:43.032 SYMLINK libspdk_blobfs.so 00:03:43.032 CC lib/ftl/ftl_init.o 00:03:43.032 SYMLINK libspdk_lvol.so 00:03:43.032 CC lib/ftl/ftl_layout.o 00:03:43.032 CC lib/ftl/ftl_debug.o 00:03:43.290 CC lib/ftl/ftl_io.o 00:03:43.290 CC lib/scsi/scsi.o 00:03:43.290 CC lib/scsi/scsi_bdev.o 00:03:43.290 CC lib/scsi/scsi_pr.o 00:03:43.290 CC lib/scsi/scsi_rpc.o 00:03:43.290 CC lib/scsi/task.o 00:03:43.549 CC lib/ftl/ftl_sb.o 00:03:43.549 LIB libspdk_nbd.a 00:03:43.549 CC lib/ftl/ftl_l2p.o 00:03:43.549 CC lib/ftl/ftl_l2p_flat.o 00:03:43.549 SO libspdk_nbd.so.7.0 00:03:43.549 CC lib/ftl/ftl_nv_cache.o 00:03:43.549 SYMLINK libspdk_nbd.so 00:03:43.549 CC lib/ftl/ftl_band.o 00:03:43.549 CC lib/ublk/ublk_rpc.o 00:03:43.549 CC lib/ftl/ftl_band_ops.o 00:03:43.549 CC lib/ftl/ftl_writer.o 00:03:43.549 CC lib/ftl/ftl_rq.o 00:03:43.807 CC lib/ftl/ftl_reloc.o 00:03:43.807 CC lib/ftl/ftl_l2p_cache.o 00:03:43.807 LIB libspdk_scsi.a 00:03:43.807 SO libspdk_scsi.so.9.0 00:03:43.807 LIB libspdk_ublk.a 00:03:43.807 SO libspdk_ublk.so.3.0 00:03:43.807 CC lib/ftl/ftl_p2l.o 00:03:43.807 SYMLINK libspdk_scsi.so 00:03:43.807 CC lib/ftl/ftl_p2l_log.o 00:03:43.807 SYMLINK libspdk_ublk.so 00:03:43.807 CC lib/nvmf/ctrlr_discovery.o 00:03:44.066 CC lib/ftl/mngt/ftl_mngt.o 00:03:44.066 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:44.066 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:44.066 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:44.066 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:44.324 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:44.324 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:44.324 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:44.324 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:44.324 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:44.324 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:44.583 CC lib/iscsi/conn.o 00:03:44.583 CC lib/iscsi/init_grp.o 00:03:44.583 CC lib/iscsi/iscsi.o 00:03:44.583 CC lib/iscsi/param.o 00:03:44.583 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:44.583 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:44.583 CC lib/nvmf/ctrlr_bdev.o 00:03:44.583 CC lib/vhost/vhost.o 00:03:44.583 CC lib/vhost/vhost_rpc.o 00:03:44.842 CC lib/nvmf/subsystem.o 00:03:44.842 CC lib/iscsi/portal_grp.o 00:03:44.842 CC lib/iscsi/tgt_node.o 00:03:44.842 CC 
lib/iscsi/iscsi_subsystem.o 00:03:44.842 CC lib/ftl/utils/ftl_conf.o 00:03:45.100 CC lib/nvmf/nvmf.o 00:03:45.100 CC lib/ftl/utils/ftl_md.o 00:03:45.100 CC lib/ftl/utils/ftl_mempool.o 00:03:45.358 CC lib/vhost/vhost_scsi.o 00:03:45.358 CC lib/vhost/vhost_blk.o 00:03:45.358 CC lib/vhost/rte_vhost_user.o 00:03:45.358 CC lib/ftl/utils/ftl_bitmap.o 00:03:45.358 CC lib/ftl/utils/ftl_property.o 00:03:45.617 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:45.617 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:45.617 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:45.876 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:45.876 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:45.876 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:45.876 CC lib/nvmf/nvmf_rpc.o 00:03:45.876 CC lib/iscsi/iscsi_rpc.o 00:03:45.876 CC lib/nvmf/transport.o 00:03:45.876 CC lib/nvmf/tcp.o 00:03:46.135 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:46.135 CC lib/nvmf/stubs.o 00:03:46.135 CC lib/iscsi/task.o 00:03:46.135 CC lib/nvmf/mdns_server.o 00:03:46.135 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:46.135 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:46.393 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:46.393 LIB libspdk_iscsi.a 00:03:46.393 SO libspdk_iscsi.so.8.0 00:03:46.393 LIB libspdk_vhost.a 00:03:46.393 CC lib/nvmf/rdma.o 00:03:46.393 SO libspdk_vhost.so.8.0 00:03:46.393 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:46.652 CC lib/nvmf/auth.o 00:03:46.652 SYMLINK libspdk_iscsi.so 00:03:46.652 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:46.652 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:46.652 SYMLINK libspdk_vhost.so 00:03:46.652 CC lib/ftl/base/ftl_base_dev.o 00:03:46.652 CC lib/ftl/base/ftl_base_bdev.o 00:03:46.652 CC lib/ftl/ftl_trace.o 00:03:46.910 LIB libspdk_ftl.a 00:03:47.168 SO libspdk_ftl.so.9.0 00:03:47.427 SYMLINK libspdk_ftl.so 00:03:48.820 LIB libspdk_nvmf.a 00:03:48.820 SO libspdk_nvmf.so.20.0 00:03:48.820 SYMLINK libspdk_nvmf.so 00:03:49.398 CC module/env_dpdk/env_dpdk_rpc.o 00:03:49.398 CC module/accel/iaa/accel_iaa.o 00:03:49.398 CC module/accel/dsa/accel_dsa.o 00:03:49.398 CC module/sock/posix/posix.o 00:03:49.398 CC module/accel/error/accel_error.o 00:03:49.398 CC module/keyring/file/keyring.o 00:03:49.398 CC module/fsdev/aio/fsdev_aio.o 00:03:49.398 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:49.398 CC module/blob/bdev/blob_bdev.o 00:03:49.398 CC module/accel/ioat/accel_ioat.o 00:03:49.398 LIB libspdk_env_dpdk_rpc.a 00:03:49.398 SO libspdk_env_dpdk_rpc.so.6.0 00:03:49.398 SYMLINK libspdk_env_dpdk_rpc.so 00:03:49.398 CC module/accel/error/accel_error_rpc.o 00:03:49.657 CC module/keyring/file/keyring_rpc.o 00:03:49.657 CC module/accel/iaa/accel_iaa_rpc.o 00:03:49.657 CC module/accel/ioat/accel_ioat_rpc.o 00:03:49.658 LIB libspdk_scheduler_dynamic.a 00:03:49.658 LIB libspdk_accel_error.a 00:03:49.658 CC module/accel/dsa/accel_dsa_rpc.o 00:03:49.658 SO libspdk_scheduler_dynamic.so.4.0 00:03:49.658 LIB libspdk_blob_bdev.a 00:03:49.658 SO libspdk_accel_error.so.2.0 00:03:49.658 SO libspdk_blob_bdev.so.11.0 00:03:49.658 LIB libspdk_keyring_file.a 00:03:49.658 SYMLINK libspdk_scheduler_dynamic.so 00:03:49.658 CC module/sock/uring/uring.o 00:03:49.658 LIB libspdk_accel_iaa.a 00:03:49.658 SO libspdk_keyring_file.so.2.0 00:03:49.658 SYMLINK libspdk_accel_error.so 00:03:49.658 LIB libspdk_accel_ioat.a 00:03:49.658 SYMLINK libspdk_blob_bdev.so 00:03:49.658 SO libspdk_accel_iaa.so.3.0 00:03:49.658 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:49.916 LIB libspdk_accel_dsa.a 00:03:49.916 SO libspdk_accel_ioat.so.6.0 00:03:49.916 SYMLINK libspdk_keyring_file.so 
00:03:49.916 SO libspdk_accel_dsa.so.5.0 00:03:49.916 SYMLINK libspdk_accel_iaa.so 00:03:49.916 SYMLINK libspdk_accel_ioat.so 00:03:49.916 CC module/fsdev/aio/linux_aio_mgr.o 00:03:49.916 SYMLINK libspdk_accel_dsa.so 00:03:49.916 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:49.916 CC module/scheduler/gscheduler/gscheduler.o 00:03:49.917 CC module/keyring/linux/keyring.o 00:03:49.917 CC module/keyring/linux/keyring_rpc.o 00:03:50.175 LIB libspdk_scheduler_dpdk_governor.a 00:03:50.175 LIB libspdk_sock_posix.a 00:03:50.175 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:50.175 LIB libspdk_fsdev_aio.a 00:03:50.175 LIB libspdk_scheduler_gscheduler.a 00:03:50.175 SO libspdk_sock_posix.so.6.0 00:03:50.175 CC module/bdev/delay/vbdev_delay.o 00:03:50.175 CC module/bdev/error/vbdev_error.o 00:03:50.175 SO libspdk_scheduler_gscheduler.so.4.0 00:03:50.175 SO libspdk_fsdev_aio.so.1.0 00:03:50.175 LIB libspdk_keyring_linux.a 00:03:50.175 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:50.175 CC module/blobfs/bdev/blobfs_bdev.o 00:03:50.175 SYMLINK libspdk_scheduler_gscheduler.so 00:03:50.175 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:50.175 CC module/bdev/error/vbdev_error_rpc.o 00:03:50.175 SYMLINK libspdk_sock_posix.so 00:03:50.175 SYMLINK libspdk_fsdev_aio.so 00:03:50.175 SO libspdk_keyring_linux.so.1.0 00:03:50.175 SYMLINK libspdk_keyring_linux.so 00:03:50.434 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:50.434 LIB libspdk_blobfs_bdev.a 00:03:50.434 CC module/bdev/lvol/vbdev_lvol.o 00:03:50.434 CC module/bdev/gpt/gpt.o 00:03:50.434 LIB libspdk_sock_uring.a 00:03:50.434 CC module/bdev/malloc/bdev_malloc.o 00:03:50.434 SO libspdk_blobfs_bdev.so.6.0 00:03:50.434 LIB libspdk_bdev_error.a 00:03:50.434 SO libspdk_sock_uring.so.5.0 00:03:50.434 SO libspdk_bdev_error.so.6.0 00:03:50.434 CC module/bdev/null/bdev_null.o 00:03:50.434 SYMLINK libspdk_blobfs_bdev.so 00:03:50.434 SYMLINK libspdk_sock_uring.so 00:03:50.434 SYMLINK libspdk_bdev_error.so 00:03:50.693 LIB libspdk_bdev_delay.a 00:03:50.693 CC module/bdev/nvme/bdev_nvme.o 00:03:50.693 SO libspdk_bdev_delay.so.6.0 00:03:50.693 CC module/bdev/gpt/vbdev_gpt.o 00:03:50.693 SYMLINK libspdk_bdev_delay.so 00:03:50.693 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:50.693 CC module/bdev/passthru/vbdev_passthru.o 00:03:50.693 CC module/bdev/raid/bdev_raid.o 00:03:50.693 CC module/bdev/split/vbdev_split.o 00:03:50.693 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:50.693 CC module/bdev/null/bdev_null_rpc.o 00:03:50.693 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:50.951 LIB libspdk_bdev_gpt.a 00:03:50.951 LIB libspdk_bdev_null.a 00:03:50.951 SO libspdk_bdev_gpt.so.6.0 00:03:50.951 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:50.951 CC module/bdev/split/vbdev_split_rpc.o 00:03:50.951 SO libspdk_bdev_null.so.6.0 00:03:50.951 LIB libspdk_bdev_malloc.a 00:03:50.951 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:50.951 SYMLINK libspdk_bdev_gpt.so 00:03:50.951 SO libspdk_bdev_malloc.so.6.0 00:03:50.951 SYMLINK libspdk_bdev_null.so 00:03:50.951 CC module/bdev/raid/bdev_raid_rpc.o 00:03:51.210 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:51.210 SYMLINK libspdk_bdev_malloc.so 00:03:51.210 LIB libspdk_bdev_split.a 00:03:51.210 LIB libspdk_bdev_passthru.a 00:03:51.210 SO libspdk_bdev_split.so.6.0 00:03:51.210 CC module/bdev/uring/bdev_uring.o 00:03:51.210 SO libspdk_bdev_passthru.so.6.0 00:03:51.210 CC module/bdev/aio/bdev_aio.o 00:03:51.210 SYMLINK libspdk_bdev_split.so 00:03:51.210 LIB libspdk_bdev_zone_block.a 00:03:51.210 CC 
module/bdev/raid/bdev_raid_sb.o 00:03:51.210 SYMLINK libspdk_bdev_passthru.so 00:03:51.210 SO libspdk_bdev_zone_block.so.6.0 00:03:51.210 CC module/bdev/raid/raid0.o 00:03:51.210 CC module/bdev/nvme/nvme_rpc.o 00:03:51.468 CC module/bdev/uring/bdev_uring_rpc.o 00:03:51.468 LIB libspdk_bdev_lvol.a 00:03:51.468 SO libspdk_bdev_lvol.so.6.0 00:03:51.468 SYMLINK libspdk_bdev_zone_block.so 00:03:51.468 CC module/bdev/aio/bdev_aio_rpc.o 00:03:51.468 SYMLINK libspdk_bdev_lvol.so 00:03:51.468 CC module/bdev/raid/raid1.o 00:03:51.468 CC module/bdev/raid/concat.o 00:03:51.468 LIB libspdk_bdev_uring.a 00:03:51.726 CC module/bdev/ftl/bdev_ftl.o 00:03:51.726 CC module/bdev/nvme/bdev_mdns_client.o 00:03:51.726 SO libspdk_bdev_uring.so.6.0 00:03:51.726 LIB libspdk_bdev_aio.a 00:03:51.726 SO libspdk_bdev_aio.so.6.0 00:03:51.726 CC module/bdev/iscsi/bdev_iscsi.o 00:03:51.726 SYMLINK libspdk_bdev_uring.so 00:03:51.726 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:51.726 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:51.726 SYMLINK libspdk_bdev_aio.so 00:03:51.726 CC module/bdev/nvme/vbdev_opal.o 00:03:51.726 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:51.726 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:51.726 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:51.984 LIB libspdk_bdev_raid.a 00:03:51.984 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:51.984 SO libspdk_bdev_raid.so.6.0 00:03:51.984 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:51.984 LIB libspdk_bdev_ftl.a 00:03:51.984 SO libspdk_bdev_ftl.so.6.0 00:03:51.984 SYMLINK libspdk_bdev_raid.so 00:03:51.984 SYMLINK libspdk_bdev_ftl.so 00:03:51.984 LIB libspdk_bdev_iscsi.a 00:03:52.243 SO libspdk_bdev_iscsi.so.6.0 00:03:52.243 SYMLINK libspdk_bdev_iscsi.so 00:03:52.243 LIB libspdk_bdev_virtio.a 00:03:52.243 SO libspdk_bdev_virtio.so.6.0 00:03:52.501 SYMLINK libspdk_bdev_virtio.so 00:03:53.437 LIB libspdk_bdev_nvme.a 00:03:53.437 SO libspdk_bdev_nvme.so.7.1 00:03:53.437 SYMLINK libspdk_bdev_nvme.so 00:03:54.012 CC module/event/subsystems/keyring/keyring.o 00:03:54.012 CC module/event/subsystems/sock/sock.o 00:03:54.012 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:54.012 CC module/event/subsystems/vmd/vmd.o 00:03:54.012 CC module/event/subsystems/iobuf/iobuf.o 00:03:54.012 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:54.012 CC module/event/subsystems/fsdev/fsdev.o 00:03:54.012 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:54.012 CC module/event/subsystems/scheduler/scheduler.o 00:03:54.012 LIB libspdk_event_sock.a 00:03:54.271 LIB libspdk_event_keyring.a 00:03:54.271 LIB libspdk_event_fsdev.a 00:03:54.271 LIB libspdk_event_vhost_blk.a 00:03:54.271 LIB libspdk_event_vmd.a 00:03:54.271 SO libspdk_event_sock.so.5.0 00:03:54.271 LIB libspdk_event_scheduler.a 00:03:54.271 LIB libspdk_event_iobuf.a 00:03:54.271 SO libspdk_event_keyring.so.1.0 00:03:54.271 SO libspdk_event_scheduler.so.4.0 00:03:54.271 SO libspdk_event_fsdev.so.1.0 00:03:54.271 SO libspdk_event_vhost_blk.so.3.0 00:03:54.271 SO libspdk_event_vmd.so.6.0 00:03:54.271 SO libspdk_event_iobuf.so.3.0 00:03:54.271 SYMLINK libspdk_event_sock.so 00:03:54.271 SYMLINK libspdk_event_keyring.so 00:03:54.271 SYMLINK libspdk_event_scheduler.so 00:03:54.271 SYMLINK libspdk_event_fsdev.so 00:03:54.271 SYMLINK libspdk_event_vhost_blk.so 00:03:54.271 SYMLINK libspdk_event_vmd.so 00:03:54.271 SYMLINK libspdk_event_iobuf.so 00:03:54.530 CC module/event/subsystems/accel/accel.o 00:03:54.789 LIB libspdk_event_accel.a 00:03:54.789 SO libspdk_event_accel.so.6.0 00:03:54.789 SYMLINK libspdk_event_accel.so 00:03:55.049 
CC module/event/subsystems/bdev/bdev.o 00:03:55.307 LIB libspdk_event_bdev.a 00:03:55.307 SO libspdk_event_bdev.so.6.0 00:03:55.566 SYMLINK libspdk_event_bdev.so 00:03:55.567 CC module/event/subsystems/ublk/ublk.o 00:03:55.567 CC module/event/subsystems/nbd/nbd.o 00:03:55.567 CC module/event/subsystems/scsi/scsi.o 00:03:55.567 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:55.567 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:55.826 LIB libspdk_event_ublk.a 00:03:55.826 LIB libspdk_event_nbd.a 00:03:55.826 LIB libspdk_event_scsi.a 00:03:55.826 SO libspdk_event_ublk.so.3.0 00:03:55.826 SO libspdk_event_nbd.so.6.0 00:03:55.826 SO libspdk_event_scsi.so.6.0 00:03:55.826 SYMLINK libspdk_event_ublk.so 00:03:55.826 SYMLINK libspdk_event_nbd.so 00:03:56.084 SYMLINK libspdk_event_scsi.so 00:03:56.084 LIB libspdk_event_nvmf.a 00:03:56.084 SO libspdk_event_nvmf.so.6.0 00:03:56.084 SYMLINK libspdk_event_nvmf.so 00:03:56.084 CC module/event/subsystems/iscsi/iscsi.o 00:03:56.343 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:56.343 LIB libspdk_event_vhost_scsi.a 00:03:56.343 LIB libspdk_event_iscsi.a 00:03:56.343 SO libspdk_event_vhost_scsi.so.3.0 00:03:56.343 SO libspdk_event_iscsi.so.6.0 00:03:56.602 SYMLINK libspdk_event_vhost_scsi.so 00:03:56.602 SYMLINK libspdk_event_iscsi.so 00:03:56.602 SO libspdk.so.6.0 00:03:56.602 SYMLINK libspdk.so 00:03:56.862 TEST_HEADER include/spdk/accel.h 00:03:56.862 TEST_HEADER include/spdk/accel_module.h 00:03:56.862 TEST_HEADER include/spdk/assert.h 00:03:56.862 TEST_HEADER include/spdk/barrier.h 00:03:56.862 CC test/rpc_client/rpc_client_test.o 00:03:56.862 TEST_HEADER include/spdk/base64.h 00:03:56.862 CXX app/trace/trace.o 00:03:56.862 TEST_HEADER include/spdk/bdev.h 00:03:56.862 TEST_HEADER include/spdk/bdev_module.h 00:03:56.862 TEST_HEADER include/spdk/bdev_zone.h 00:03:56.862 TEST_HEADER include/spdk/bit_array.h 00:03:56.862 TEST_HEADER include/spdk/bit_pool.h 00:03:56.862 TEST_HEADER include/spdk/blob_bdev.h 00:03:56.862 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:56.862 TEST_HEADER include/spdk/blobfs.h 00:03:56.862 TEST_HEADER include/spdk/blob.h 00:03:57.121 TEST_HEADER include/spdk/conf.h 00:03:57.121 TEST_HEADER include/spdk/config.h 00:03:57.121 TEST_HEADER include/spdk/cpuset.h 00:03:57.121 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:57.121 TEST_HEADER include/spdk/crc16.h 00:03:57.121 TEST_HEADER include/spdk/crc32.h 00:03:57.121 TEST_HEADER include/spdk/crc64.h 00:03:57.121 TEST_HEADER include/spdk/dif.h 00:03:57.121 TEST_HEADER include/spdk/dma.h 00:03:57.121 TEST_HEADER include/spdk/endian.h 00:03:57.121 TEST_HEADER include/spdk/env_dpdk.h 00:03:57.121 TEST_HEADER include/spdk/env.h 00:03:57.121 TEST_HEADER include/spdk/event.h 00:03:57.121 TEST_HEADER include/spdk/fd_group.h 00:03:57.121 TEST_HEADER include/spdk/fd.h 00:03:57.121 TEST_HEADER include/spdk/file.h 00:03:57.121 TEST_HEADER include/spdk/fsdev.h 00:03:57.121 TEST_HEADER include/spdk/fsdev_module.h 00:03:57.121 TEST_HEADER include/spdk/ftl.h 00:03:57.121 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:57.121 TEST_HEADER include/spdk/gpt_spec.h 00:03:57.121 TEST_HEADER include/spdk/hexlify.h 00:03:57.121 CC examples/ioat/perf/perf.o 00:03:57.121 TEST_HEADER include/spdk/histogram_data.h 00:03:57.121 TEST_HEADER include/spdk/idxd.h 00:03:57.121 CC test/thread/poller_perf/poller_perf.o 00:03:57.121 TEST_HEADER include/spdk/idxd_spec.h 00:03:57.121 TEST_HEADER include/spdk/init.h 00:03:57.121 CC examples/util/zipf/zipf.o 00:03:57.121 TEST_HEADER include/spdk/ioat.h 
00:03:57.121 TEST_HEADER include/spdk/ioat_spec.h 00:03:57.121 TEST_HEADER include/spdk/iscsi_spec.h 00:03:57.121 TEST_HEADER include/spdk/json.h 00:03:57.121 TEST_HEADER include/spdk/jsonrpc.h 00:03:57.121 TEST_HEADER include/spdk/keyring.h 00:03:57.121 TEST_HEADER include/spdk/keyring_module.h 00:03:57.121 TEST_HEADER include/spdk/likely.h 00:03:57.121 TEST_HEADER include/spdk/log.h 00:03:57.121 TEST_HEADER include/spdk/lvol.h 00:03:57.121 TEST_HEADER include/spdk/md5.h 00:03:57.121 TEST_HEADER include/spdk/memory.h 00:03:57.122 TEST_HEADER include/spdk/mmio.h 00:03:57.122 TEST_HEADER include/spdk/nbd.h 00:03:57.122 TEST_HEADER include/spdk/net.h 00:03:57.122 TEST_HEADER include/spdk/notify.h 00:03:57.122 CC test/app/bdev_svc/bdev_svc.o 00:03:57.122 TEST_HEADER include/spdk/nvme.h 00:03:57.122 TEST_HEADER include/spdk/nvme_intel.h 00:03:57.122 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:57.122 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:57.122 TEST_HEADER include/spdk/nvme_spec.h 00:03:57.122 TEST_HEADER include/spdk/nvme_zns.h 00:03:57.122 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:57.122 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:57.122 CC test/dma/test_dma/test_dma.o 00:03:57.122 TEST_HEADER include/spdk/nvmf.h 00:03:57.122 TEST_HEADER include/spdk/nvmf_spec.h 00:03:57.122 TEST_HEADER include/spdk/nvmf_transport.h 00:03:57.122 TEST_HEADER include/spdk/opal.h 00:03:57.122 TEST_HEADER include/spdk/opal_spec.h 00:03:57.122 TEST_HEADER include/spdk/pci_ids.h 00:03:57.122 TEST_HEADER include/spdk/pipe.h 00:03:57.122 TEST_HEADER include/spdk/queue.h 00:03:57.122 TEST_HEADER include/spdk/reduce.h 00:03:57.122 TEST_HEADER include/spdk/rpc.h 00:03:57.122 CC test/env/mem_callbacks/mem_callbacks.o 00:03:57.122 TEST_HEADER include/spdk/scheduler.h 00:03:57.122 TEST_HEADER include/spdk/scsi.h 00:03:57.122 TEST_HEADER include/spdk/scsi_spec.h 00:03:57.122 TEST_HEADER include/spdk/sock.h 00:03:57.122 TEST_HEADER include/spdk/stdinc.h 00:03:57.122 TEST_HEADER include/spdk/string.h 00:03:57.122 TEST_HEADER include/spdk/thread.h 00:03:57.122 TEST_HEADER include/spdk/trace.h 00:03:57.122 TEST_HEADER include/spdk/trace_parser.h 00:03:57.122 TEST_HEADER include/spdk/tree.h 00:03:57.122 TEST_HEADER include/spdk/ublk.h 00:03:57.122 TEST_HEADER include/spdk/util.h 00:03:57.122 TEST_HEADER include/spdk/uuid.h 00:03:57.122 TEST_HEADER include/spdk/version.h 00:03:57.122 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:57.122 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:57.122 TEST_HEADER include/spdk/vhost.h 00:03:57.122 TEST_HEADER include/spdk/vmd.h 00:03:57.122 TEST_HEADER include/spdk/xor.h 00:03:57.122 LINK rpc_client_test 00:03:57.122 TEST_HEADER include/spdk/zipf.h 00:03:57.122 CXX test/cpp_headers/accel.o 00:03:57.381 LINK interrupt_tgt 00:03:57.381 LINK poller_perf 00:03:57.381 LINK zipf 00:03:57.381 LINK ioat_perf 00:03:57.381 CXX test/cpp_headers/accel_module.o 00:03:57.381 LINK bdev_svc 00:03:57.381 LINK spdk_trace 00:03:57.640 CC examples/ioat/verify/verify.o 00:03:57.640 CC test/env/vtophys/vtophys.o 00:03:57.640 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:57.640 CC test/env/pci/pci_ut.o 00:03:57.640 CXX test/cpp_headers/assert.o 00:03:57.640 CC test/env/memory/memory_ut.o 00:03:57.640 LINK test_dma 00:03:57.640 LINK vtophys 00:03:57.640 LINK env_dpdk_post_init 00:03:57.640 CXX test/cpp_headers/barrier.o 00:03:57.640 CC app/trace_record/trace_record.o 00:03:57.897 LINK verify 00:03:57.897 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:57.897 CXX test/cpp_headers/base64.o 
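(Editorial aside, not part of the captured build output.) The TEST_HEADER and CXX test/cpp_headers/*.o lines above record a self-containedness check: each public spdk/*.h header is compiled on its own as C++, so a header with missing includes or C++ incompatibilities fails the build. A minimal sketch of the idea follows, assuming an include/spdk/ layout; the actual test/cpp_headers machinery in the SPDK tree is generated differently, so file names and compiler flags here are assumptions.

# Minimal sketch: compile every public header in isolation as C++.
for h in include/spdk/*.h; do
    name=$(basename "$h" .h)
    # Generate a one-line translation unit that only includes the header.
    printf '#include <spdk/%s.h>\n' "$name" > "test/cpp_headers/${name}.cpp"
    g++ -std=c++11 -Iinclude -c "test/cpp_headers/${name}.cpp" \
        -o "test/cpp_headers/${name}.o"
done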
00:03:57.897 LINK mem_callbacks 00:03:57.897 CXX test/cpp_headers/bdev.o 00:03:57.897 LINK pci_ut 00:03:58.156 LINK spdk_trace_record 00:03:58.156 CXX test/cpp_headers/bdev_module.o 00:03:58.156 CC examples/vmd/lsvmd/lsvmd.o 00:03:58.156 CC examples/sock/hello_world/hello_sock.o 00:03:58.156 CC examples/thread/thread/thread_ex.o 00:03:58.156 CC test/event/event_perf/event_perf.o 00:03:58.156 CC test/nvme/aer/aer.o 00:03:58.156 LINK nvme_fuzz 00:03:58.156 LINK lsvmd 00:03:58.416 CXX test/cpp_headers/bdev_zone.o 00:03:58.416 CC test/nvme/reset/reset.o 00:03:58.416 CC app/nvmf_tgt/nvmf_main.o 00:03:58.416 LINK event_perf 00:03:58.416 LINK hello_sock 00:03:58.416 LINK thread 00:03:58.416 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:58.416 LINK aer 00:03:58.674 CXX test/cpp_headers/bit_array.o 00:03:58.674 CC examples/vmd/led/led.o 00:03:58.674 LINK nvmf_tgt 00:03:58.674 CC test/event/reactor/reactor.o 00:03:58.674 CC test/nvme/sgl/sgl.o 00:03:58.674 CXX test/cpp_headers/bit_pool.o 00:03:58.674 LINK reset 00:03:58.674 LINK led 00:03:58.674 LINK reactor 00:03:58.933 CXX test/cpp_headers/blob_bdev.o 00:03:58.933 LINK memory_ut 00:03:58.933 CC test/accel/dif/dif.o 00:03:58.933 LINK sgl 00:03:58.933 CC app/iscsi_tgt/iscsi_tgt.o 00:03:58.933 CC test/blobfs/mkfs/mkfs.o 00:03:58.933 CC test/event/reactor_perf/reactor_perf.o 00:03:58.933 CXX test/cpp_headers/blobfs_bdev.o 00:03:59.250 CXX test/cpp_headers/blobfs.o 00:03:59.250 CC examples/idxd/perf/perf.o 00:03:59.250 CC test/lvol/esnap/esnap.o 00:03:59.250 LINK reactor_perf 00:03:59.250 LINK iscsi_tgt 00:03:59.250 CC test/nvme/e2edp/nvme_dp.o 00:03:59.250 LINK mkfs 00:03:59.250 CXX test/cpp_headers/blob.o 00:03:59.250 CC test/nvme/overhead/overhead.o 00:03:59.519 CC test/event/app_repeat/app_repeat.o 00:03:59.519 CXX test/cpp_headers/conf.o 00:03:59.519 LINK idxd_perf 00:03:59.519 LINK nvme_dp 00:03:59.519 CC app/spdk_tgt/spdk_tgt.o 00:03:59.519 CC test/event/scheduler/scheduler.o 00:03:59.519 LINK app_repeat 00:03:59.519 LINK dif 00:03:59.519 CXX test/cpp_headers/config.o 00:03:59.519 LINK overhead 00:03:59.780 CXX test/cpp_headers/cpuset.o 00:03:59.780 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:59.780 LINK spdk_tgt 00:03:59.780 CXX test/cpp_headers/crc16.o 00:03:59.780 LINK scheduler 00:03:59.780 CXX test/cpp_headers/crc32.o 00:03:59.780 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:59.780 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:59.780 CC test/nvme/err_injection/err_injection.o 00:04:00.039 CC test/nvme/startup/startup.o 00:04:00.039 CXX test/cpp_headers/crc64.o 00:04:00.039 CC test/nvme/reserve/reserve.o 00:04:00.039 CC app/spdk_lspci/spdk_lspci.o 00:04:00.039 CC test/nvme/simple_copy/simple_copy.o 00:04:00.039 LINK hello_fsdev 00:04:00.039 LINK err_injection 00:04:00.039 CXX test/cpp_headers/dif.o 00:04:00.039 LINK startup 00:04:00.297 LINK spdk_lspci 00:04:00.297 LINK iscsi_fuzz 00:04:00.297 LINK reserve 00:04:00.297 LINK vhost_fuzz 00:04:00.297 CXX test/cpp_headers/dma.o 00:04:00.297 LINK simple_copy 00:04:00.297 CC test/nvme/connect_stress/connect_stress.o 00:04:00.297 CC test/nvme/boot_partition/boot_partition.o 00:04:00.556 CC app/spdk_nvme_perf/perf.o 00:04:00.556 CC examples/accel/perf/accel_perf.o 00:04:00.556 CXX test/cpp_headers/endian.o 00:04:00.556 CC app/spdk_nvme_identify/identify.o 00:04:00.556 CC test/nvme/compliance/nvme_compliance.o 00:04:00.556 CC test/app/jsoncat/jsoncat.o 00:04:00.556 CC test/app/histogram_perf/histogram_perf.o 00:04:00.556 LINK boot_partition 00:04:00.556 LINK connect_stress 00:04:00.556 
CXX test/cpp_headers/env_dpdk.o 00:04:00.815 LINK jsoncat 00:04:00.815 LINK histogram_perf 00:04:00.815 CXX test/cpp_headers/env.o 00:04:00.815 CXX test/cpp_headers/event.o 00:04:00.815 LINK nvme_compliance 00:04:01.074 CXX test/cpp_headers/fd_group.o 00:04:01.074 LINK accel_perf 00:04:01.074 CC test/app/stub/stub.o 00:04:01.074 CC test/bdev/bdevio/bdevio.o 00:04:01.074 CC examples/nvme/hello_world/hello_world.o 00:04:01.074 CC test/nvme/fused_ordering/fused_ordering.o 00:04:01.074 CC examples/blob/hello_world/hello_blob.o 00:04:01.074 CXX test/cpp_headers/fd.o 00:04:01.074 LINK stub 00:04:01.333 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:01.333 LINK spdk_nvme_perf 00:04:01.333 LINK fused_ordering 00:04:01.333 LINK hello_world 00:04:01.333 LINK spdk_nvme_identify 00:04:01.333 CXX test/cpp_headers/file.o 00:04:01.333 LINK hello_blob 00:04:01.333 CC test/nvme/fdp/fdp.o 00:04:01.333 LINK doorbell_aers 00:04:01.333 LINK bdevio 00:04:01.592 CXX test/cpp_headers/fsdev.o 00:04:01.592 CC app/spdk_nvme_discover/discovery_aer.o 00:04:01.592 CC app/spdk_top/spdk_top.o 00:04:01.592 CC examples/nvme/reconnect/reconnect.o 00:04:01.592 CC test/nvme/cuse/cuse.o 00:04:01.851 CXX test/cpp_headers/fsdev_module.o 00:04:01.851 CC examples/blob/cli/blobcli.o 00:04:01.851 LINK fdp 00:04:01.851 CC app/vhost/vhost.o 00:04:01.851 CC examples/bdev/hello_world/hello_bdev.o 00:04:01.851 LINK spdk_nvme_discover 00:04:01.851 CXX test/cpp_headers/ftl.o 00:04:02.109 LINK vhost 00:04:02.109 LINK reconnect 00:04:02.109 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:02.109 LINK hello_bdev 00:04:02.109 CC examples/nvme/arbitration/arbitration.o 00:04:02.109 CXX test/cpp_headers/fuse_dispatcher.o 00:04:02.368 LINK blobcli 00:04:02.368 CC examples/bdev/bdevperf/bdevperf.o 00:04:02.368 CC app/spdk_dd/spdk_dd.o 00:04:02.368 CXX test/cpp_headers/gpt_spec.o 00:04:02.368 CC examples/nvme/hotplug/hotplug.o 00:04:02.368 LINK arbitration 00:04:02.368 CXX test/cpp_headers/hexlify.o 00:04:02.627 LINK spdk_top 00:04:02.627 LINK nvme_manage 00:04:02.627 LINK hotplug 00:04:02.627 CXX test/cpp_headers/histogram_data.o 00:04:02.627 CXX test/cpp_headers/idxd.o 00:04:02.627 CC app/fio/nvme/fio_plugin.o 00:04:02.627 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:02.886 LINK spdk_dd 00:04:02.886 CC examples/nvme/abort/abort.o 00:04:02.886 CXX test/cpp_headers/idxd_spec.o 00:04:02.886 CXX test/cpp_headers/init.o 00:04:02.886 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:02.886 LINK cmb_copy 00:04:03.144 CXX test/cpp_headers/ioat.o 00:04:03.144 LINK cuse 00:04:03.144 CXX test/cpp_headers/ioat_spec.o 00:04:03.144 LINK pmr_persistence 00:04:03.144 CXX test/cpp_headers/iscsi_spec.o 00:04:03.144 CC app/fio/bdev/fio_plugin.o 00:04:03.144 LINK bdevperf 00:04:03.144 LINK abort 00:04:03.144 CXX test/cpp_headers/json.o 00:04:03.144 CXX test/cpp_headers/jsonrpc.o 00:04:03.403 CXX test/cpp_headers/keyring.o 00:04:03.403 CXX test/cpp_headers/keyring_module.o 00:04:03.403 LINK spdk_nvme 00:04:03.403 CXX test/cpp_headers/likely.o 00:04:03.403 CXX test/cpp_headers/log.o 00:04:03.403 CXX test/cpp_headers/lvol.o 00:04:03.403 CXX test/cpp_headers/md5.o 00:04:03.403 CXX test/cpp_headers/memory.o 00:04:03.403 CXX test/cpp_headers/mmio.o 00:04:03.403 CXX test/cpp_headers/nbd.o 00:04:03.403 CXX test/cpp_headers/net.o 00:04:03.403 CXX test/cpp_headers/notify.o 00:04:03.662 CXX test/cpp_headers/nvme.o 00:04:03.662 CXX test/cpp_headers/nvme_intel.o 00:04:03.662 CC examples/nvmf/nvmf/nvmf.o 00:04:03.662 CXX test/cpp_headers/nvme_ocssd.o 00:04:03.662 CXX 
test/cpp_headers/nvme_ocssd_spec.o 00:04:03.662 CXX test/cpp_headers/nvme_spec.o 00:04:03.662 LINK spdk_bdev 00:04:03.662 CXX test/cpp_headers/nvme_zns.o 00:04:03.662 CXX test/cpp_headers/nvmf_cmd.o 00:04:03.662 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:03.662 CXX test/cpp_headers/nvmf.o 00:04:03.920 CXX test/cpp_headers/nvmf_spec.o 00:04:03.920 CXX test/cpp_headers/nvmf_transport.o 00:04:03.920 CXX test/cpp_headers/opal.o 00:04:03.920 CXX test/cpp_headers/opal_spec.o 00:04:03.920 CXX test/cpp_headers/pci_ids.o 00:04:03.920 LINK nvmf 00:04:03.920 CXX test/cpp_headers/pipe.o 00:04:03.920 CXX test/cpp_headers/queue.o 00:04:03.920 CXX test/cpp_headers/reduce.o 00:04:03.920 CXX test/cpp_headers/rpc.o 00:04:03.920 CXX test/cpp_headers/scheduler.o 00:04:03.920 CXX test/cpp_headers/scsi.o 00:04:03.920 CXX test/cpp_headers/scsi_spec.o 00:04:04.179 CXX test/cpp_headers/sock.o 00:04:04.179 CXX test/cpp_headers/stdinc.o 00:04:04.179 CXX test/cpp_headers/string.o 00:04:04.179 CXX test/cpp_headers/thread.o 00:04:04.179 CXX test/cpp_headers/trace.o 00:04:04.179 CXX test/cpp_headers/trace_parser.o 00:04:04.179 CXX test/cpp_headers/tree.o 00:04:04.179 CXX test/cpp_headers/ublk.o 00:04:04.179 CXX test/cpp_headers/util.o 00:04:04.179 CXX test/cpp_headers/uuid.o 00:04:04.179 CXX test/cpp_headers/version.o 00:04:04.179 CXX test/cpp_headers/vfio_user_pci.o 00:04:04.179 CXX test/cpp_headers/vfio_user_spec.o 00:04:04.179 CXX test/cpp_headers/vhost.o 00:04:04.439 CXX test/cpp_headers/vmd.o 00:04:04.439 CXX test/cpp_headers/xor.o 00:04:04.439 LINK esnap 00:04:04.439 CXX test/cpp_headers/zipf.o 00:04:04.698 00:04:04.698 real 1m27.931s 00:04:04.698 user 8m3.055s 00:04:04.698 sys 1m42.529s 00:04:04.698 10:47:51 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:04:04.698 10:47:51 make -- common/autotest_common.sh@10 -- $ set +x 00:04:04.698 ************************************ 00:04:04.698 END TEST make 00:04:04.698 ************************************ 00:04:04.698 10:47:51 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:04.698 10:47:51 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:04.698 10:47:51 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:04.698 10:47:51 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:04.698 10:47:51 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:04.698 10:47:51 -- pm/common@44 -- $ pid=5230 00:04:04.698 10:47:51 -- pm/common@50 -- $ kill -TERM 5230 00:04:04.698 10:47:51 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:04.698 10:47:51 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:04.698 10:47:51 -- pm/common@44 -- $ pid=5231 00:04:04.698 10:47:51 -- pm/common@50 -- $ kill -TERM 5231 00:04:04.698 10:47:51 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:04:04.698 10:47:51 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:04.698 10:47:51 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:04.698 10:47:51 -- common/autotest_common.sh@1693 -- # lcov --version 00:04:04.957 10:47:51 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:04.957 10:47:51 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:04.957 10:47:51 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:04.957 10:47:51 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:04.957 10:47:51 -- 
scripts/common.sh@334 -- # local ver2 ver2_l 00:04:04.957 10:47:51 -- scripts/common.sh@336 -- # IFS=.-: 00:04:04.957 10:47:51 -- scripts/common.sh@336 -- # read -ra ver1 00:04:04.957 10:47:51 -- scripts/common.sh@337 -- # IFS=.-: 00:04:04.957 10:47:51 -- scripts/common.sh@337 -- # read -ra ver2 00:04:04.957 10:47:51 -- scripts/common.sh@338 -- # local 'op=<' 00:04:04.957 10:47:51 -- scripts/common.sh@340 -- # ver1_l=2 00:04:04.957 10:47:51 -- scripts/common.sh@341 -- # ver2_l=1 00:04:04.957 10:47:51 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:04.957 10:47:51 -- scripts/common.sh@344 -- # case "$op" in 00:04:04.958 10:47:51 -- scripts/common.sh@345 -- # : 1 00:04:04.958 10:47:51 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:04.958 10:47:51 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:04.958 10:47:51 -- scripts/common.sh@365 -- # decimal 1 00:04:04.958 10:47:51 -- scripts/common.sh@353 -- # local d=1 00:04:04.958 10:47:51 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:04.958 10:47:51 -- scripts/common.sh@355 -- # echo 1 00:04:04.958 10:47:51 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:04.958 10:47:51 -- scripts/common.sh@366 -- # decimal 2 00:04:04.958 10:47:51 -- scripts/common.sh@353 -- # local d=2 00:04:04.958 10:47:51 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:04.958 10:47:51 -- scripts/common.sh@355 -- # echo 2 00:04:04.958 10:47:51 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:04.958 10:47:51 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:04.958 10:47:51 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:04.958 10:47:51 -- scripts/common.sh@368 -- # return 0 00:04:04.958 10:47:51 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:04.958 10:47:51 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:04.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:04.958 --rc genhtml_branch_coverage=1 00:04:04.958 --rc genhtml_function_coverage=1 00:04:04.958 --rc genhtml_legend=1 00:04:04.958 --rc geninfo_all_blocks=1 00:04:04.958 --rc geninfo_unexecuted_blocks=1 00:04:04.958 00:04:04.958 ' 00:04:04.958 10:47:51 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:04.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:04.958 --rc genhtml_branch_coverage=1 00:04:04.958 --rc genhtml_function_coverage=1 00:04:04.958 --rc genhtml_legend=1 00:04:04.958 --rc geninfo_all_blocks=1 00:04:04.958 --rc geninfo_unexecuted_blocks=1 00:04:04.958 00:04:04.958 ' 00:04:04.958 10:47:51 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:04.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:04.958 --rc genhtml_branch_coverage=1 00:04:04.958 --rc genhtml_function_coverage=1 00:04:04.958 --rc genhtml_legend=1 00:04:04.958 --rc geninfo_all_blocks=1 00:04:04.958 --rc geninfo_unexecuted_blocks=1 00:04:04.958 00:04:04.958 ' 00:04:04.958 10:47:51 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:04.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:04.958 --rc genhtml_branch_coverage=1 00:04:04.958 --rc genhtml_function_coverage=1 00:04:04.958 --rc genhtml_legend=1 00:04:04.958 --rc geninfo_all_blocks=1 00:04:04.958 --rc geninfo_unexecuted_blocks=1 00:04:04.958 00:04:04.958 ' 00:04:04.958 10:47:51 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:04.958 10:47:51 -- nvmf/common.sh@7 -- # uname -s 00:04:04.958 10:47:51 -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:04.958 10:47:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:04.958 10:47:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:04.958 10:47:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:04.958 10:47:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:04.958 10:47:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:04.958 10:47:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:04.958 10:47:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:04.958 10:47:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:04.958 10:47:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:04.958 10:47:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:04:04.958 10:47:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:04:04.958 10:47:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:04.958 10:47:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:04.958 10:47:51 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:04:04.958 10:47:51 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:04.958 10:47:51 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:04.958 10:47:51 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:04.958 10:47:51 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:04.958 10:47:51 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:04.958 10:47:51 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:04.958 10:47:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:04.958 10:47:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:04.958 10:47:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:04.958 10:47:51 -- paths/export.sh@5 -- # export PATH 00:04:04.958 10:47:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:04.958 10:47:51 -- nvmf/common.sh@51 -- # : 0 00:04:04.958 10:47:51 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:04.958 10:47:51 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:04.958 10:47:51 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:04.958 10:47:51 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:04.958 10:47:51 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:04.958 10:47:51 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:04.958 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:04.958 10:47:51 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:04.958 10:47:51 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:04.958 10:47:51 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:04.958 10:47:51 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:04.958 10:47:51 -- spdk/autotest.sh@32 -- # uname -s 00:04:04.958 10:47:51 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:04.958 10:47:51 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:04.958 10:47:51 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:04.958 10:47:51 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:04.958 10:47:51 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:04.958 10:47:51 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:04.958 10:47:51 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:04.958 10:47:51 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:04.958 10:47:51 -- spdk/autotest.sh@48 -- # udevadm_pid=54312 00:04:04.958 10:47:51 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:04.958 10:47:51 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:04.958 10:47:51 -- pm/common@17 -- # local monitor 00:04:04.958 10:47:51 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:04.958 10:47:51 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:04.958 10:47:51 -- pm/common@25 -- # sleep 1 00:04:04.958 10:47:51 -- pm/common@21 -- # date +%s 00:04:04.958 10:47:51 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1731667671 00:04:04.958 10:47:51 -- pm/common@21 -- # date +%s 00:04:04.958 10:47:51 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1731667671 00:04:04.958 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1731667671_collect-vmstat.pm.log 00:04:04.958 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1731667671_collect-cpu-load.pm.log 00:04:06.338 10:47:52 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:06.338 10:47:52 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:06.338 10:47:52 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:06.338 10:47:52 -- common/autotest_common.sh@10 -- # set +x 00:04:06.338 10:47:52 -- spdk/autotest.sh@59 -- # create_test_list 00:04:06.338 10:47:52 -- common/autotest_common.sh@752 -- # xtrace_disable 00:04:06.338 10:47:52 -- common/autotest_common.sh@10 -- # set +x 00:04:06.338 10:47:52 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:06.338 10:47:52 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:06.338 10:47:52 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:06.338 10:47:52 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:06.338 10:47:52 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:06.338 10:47:52 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:06.338 10:47:52 -- common/autotest_common.sh@1457 -- # uname 00:04:06.338 10:47:52 -- 
common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:04:06.338 10:47:52 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:06.338 10:47:52 -- common/autotest_common.sh@1477 -- # uname 00:04:06.338 10:47:52 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:04:06.338 10:47:52 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:06.338 10:47:52 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:06.338 lcov: LCOV version 1.15 00:04:06.338 10:47:52 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:21.254 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:21.254 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:39.481 10:48:23 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:39.481 10:48:23 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:39.481 10:48:23 -- common/autotest_common.sh@10 -- # set +x 00:04:39.481 10:48:23 -- spdk/autotest.sh@78 -- # rm -f 00:04:39.481 10:48:23 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:39.481 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:39.481 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:39.481 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:39.481 10:48:23 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:39.481 10:48:23 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:04:39.481 10:48:23 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:39.481 10:48:23 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:04:39.481 10:48:23 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:39.481 10:48:23 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:04:39.481 10:48:23 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:39.481 10:48:23 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:39.481 10:48:23 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:39.481 10:48:23 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:39.481 10:48:23 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:04:39.481 10:48:23 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:04:39.481 10:48:23 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:39.481 10:48:23 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:39.481 10:48:23 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:39.481 10:48:23 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n2 00:04:39.481 10:48:23 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:04:39.481 10:48:23 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:39.481 10:48:23 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:39.481 10:48:23 -- common/autotest_common.sh@1660 -- # for nvme 
in /sys/block/nvme* 00:04:39.481 10:48:23 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n3 00:04:39.481 10:48:23 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:04:39.481 10:48:23 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:39.481 10:48:23 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:39.481 10:48:23 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:39.481 10:48:23 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:39.481 10:48:23 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:39.481 10:48:23 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:39.481 10:48:23 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:39.481 10:48:23 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:39.481 No valid GPT data, bailing 00:04:39.481 10:48:23 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:39.481 10:48:23 -- scripts/common.sh@394 -- # pt= 00:04:39.481 10:48:23 -- scripts/common.sh@395 -- # return 1 00:04:39.481 10:48:23 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:39.481 1+0 records in 00:04:39.481 1+0 records out 00:04:39.481 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0047484 s, 221 MB/s 00:04:39.481 10:48:23 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:39.481 10:48:23 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:39.481 10:48:23 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:04:39.481 10:48:23 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:04:39.481 10:48:23 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:39.481 No valid GPT data, bailing 00:04:39.481 10:48:24 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:39.481 10:48:24 -- scripts/common.sh@394 -- # pt= 00:04:39.481 10:48:24 -- scripts/common.sh@395 -- # return 1 00:04:39.481 10:48:24 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:39.481 1+0 records in 00:04:39.481 1+0 records out 00:04:39.481 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00387865 s, 270 MB/s 00:04:39.481 10:48:24 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:39.481 10:48:24 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:39.481 10:48:24 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:04:39.481 10:48:24 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:04:39.481 10:48:24 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:04:39.481 No valid GPT data, bailing 00:04:39.481 10:48:24 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:39.481 10:48:24 -- scripts/common.sh@394 -- # pt= 00:04:39.481 10:48:24 -- scripts/common.sh@395 -- # return 1 00:04:39.481 10:48:24 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:04:39.481 1+0 records in 00:04:39.481 1+0 records out 00:04:39.481 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0047889 s, 219 MB/s 00:04:39.481 10:48:24 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:39.481 10:48:24 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:39.481 10:48:24 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:04:39.481 10:48:24 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:04:39.481 10:48:24 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:04:39.481 No valid GPT data, bailing 
00:04:39.481 10:48:24 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:39.481 10:48:24 -- scripts/common.sh@394 -- # pt= 00:04:39.481 10:48:24 -- scripts/common.sh@395 -- # return 1 00:04:39.481 10:48:24 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:04:39.481 1+0 records in 00:04:39.481 1+0 records out 00:04:39.481 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00385315 s, 272 MB/s 00:04:39.481 10:48:24 -- spdk/autotest.sh@105 -- # sync 00:04:39.481 10:48:24 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:39.481 10:48:24 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:39.481 10:48:24 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:39.481 10:48:26 -- spdk/autotest.sh@111 -- # uname -s 00:04:39.481 10:48:26 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:39.481 10:48:26 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:39.481 10:48:26 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:40.049 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:40.049 Hugepages 00:04:40.049 node hugesize free / total 00:04:40.049 node0 1048576kB 0 / 0 00:04:40.049 node0 2048kB 0 / 0 00:04:40.049 00:04:40.049 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:40.049 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:40.308 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:40.308 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:04:40.308 10:48:27 -- spdk/autotest.sh@117 -- # uname -s 00:04:40.308 10:48:27 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:40.308 10:48:27 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:40.308 10:48:27 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:40.876 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:41.135 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:41.135 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:41.135 10:48:27 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:42.072 10:48:28 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:42.072 10:48:28 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:42.072 10:48:28 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:42.072 10:48:28 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:42.072 10:48:28 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:42.072 10:48:28 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:42.072 10:48:28 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:42.072 10:48:28 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:42.072 10:48:28 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:42.330 10:48:28 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:04:42.330 10:48:28 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:42.330 10:48:28 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:42.588 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:42.588 Waiting for block devices as requested 00:04:42.588 0000:00:11.0 (1b36 0010): uio_pci_generic 
-> nvme 00:04:42.847 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:42.847 10:48:29 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:42.847 10:48:29 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:42.847 10:48:29 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:42.847 10:48:29 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:04:42.847 10:48:29 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:42.847 10:48:29 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:42.847 10:48:29 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:42.847 10:48:29 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:04:42.848 10:48:29 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:04:42.848 10:48:29 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:04:42.848 10:48:29 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:04:42.848 10:48:29 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:42.848 10:48:29 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:42.848 10:48:29 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:42.848 10:48:29 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:42.848 10:48:29 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:42.848 10:48:29 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:04:42.848 10:48:29 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:42.848 10:48:29 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:42.848 10:48:29 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:42.848 10:48:29 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:42.848 10:48:29 -- common/autotest_common.sh@1543 -- # continue 00:04:42.848 10:48:29 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:42.848 10:48:29 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:42.848 10:48:29 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:04:42.848 10:48:29 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:42.848 10:48:29 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:42.848 10:48:29 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:42.848 10:48:29 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:42.848 10:48:29 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:42.848 10:48:29 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:42.848 10:48:29 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:42.848 10:48:29 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:42.848 10:48:29 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:42.848 10:48:29 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:42.848 10:48:29 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:42.848 10:48:29 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:42.848 10:48:29 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:42.848 10:48:29 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:42.848 10:48:29 -- 
common/autotest_common.sh@1540 -- # grep unvmcap 00:04:42.848 10:48:29 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:42.848 10:48:29 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:42.848 10:48:29 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:42.848 10:48:29 -- common/autotest_common.sh@1543 -- # continue 00:04:42.848 10:48:29 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:42.848 10:48:29 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:42.848 10:48:29 -- common/autotest_common.sh@10 -- # set +x 00:04:42.848 10:48:29 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:42.848 10:48:29 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:42.848 10:48:29 -- common/autotest_common.sh@10 -- # set +x 00:04:42.848 10:48:29 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:43.782 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:43.782 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:43.782 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:43.782 10:48:30 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:43.782 10:48:30 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:43.782 10:48:30 -- common/autotest_common.sh@10 -- # set +x 00:04:43.782 10:48:30 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:43.782 10:48:30 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:43.782 10:48:30 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:43.782 10:48:30 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:43.782 10:48:30 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:43.782 10:48:30 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:43.782 10:48:30 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:43.782 10:48:30 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:43.782 10:48:30 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:43.782 10:48:30 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:43.782 10:48:30 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:43.782 10:48:30 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:43.782 10:48:30 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:43.782 10:48:30 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:04:43.782 10:48:30 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:43.782 10:48:30 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:43.782 10:48:30 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:43.782 10:48:30 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:43.783 10:48:30 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:43.783 10:48:30 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:43.783 10:48:30 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:43.783 10:48:30 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:43.783 10:48:30 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:43.783 10:48:30 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:04:43.783 10:48:30 -- common/autotest_common.sh@1572 -- # return 0 00:04:43.783 10:48:30 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:04:43.783 10:48:30 
-- common/autotest_common.sh@1580 -- # return 0 00:04:43.783 10:48:30 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:43.783 10:48:30 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:43.783 10:48:30 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:43.783 10:48:30 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:43.783 10:48:30 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:43.783 10:48:30 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:43.783 10:48:30 -- common/autotest_common.sh@10 -- # set +x 00:04:43.783 10:48:30 -- spdk/autotest.sh@151 -- # [[ 1 -eq 1 ]] 00:04:43.783 10:48:30 -- spdk/autotest.sh@152 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:04:43.783 10:48:30 -- spdk/autotest.sh@152 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:04:43.783 10:48:30 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:43.783 10:48:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:43.783 10:48:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:43.783 10:48:30 -- common/autotest_common.sh@10 -- # set +x 00:04:43.783 ************************************ 00:04:43.783 START TEST env 00:04:43.783 ************************************ 00:04:43.783 10:48:30 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:44.041 * Looking for test storage... 00:04:44.041 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:44.041 10:48:30 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:44.041 10:48:30 env -- common/autotest_common.sh@1693 -- # lcov --version 00:04:44.041 10:48:30 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:44.041 10:48:30 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:44.041 10:48:30 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:44.041 10:48:30 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:44.041 10:48:30 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:44.041 10:48:30 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:44.041 10:48:30 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:44.041 10:48:30 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:44.041 10:48:30 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:44.041 10:48:30 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:44.041 10:48:30 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:44.041 10:48:30 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:44.041 10:48:30 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:44.041 10:48:30 env -- scripts/common.sh@344 -- # case "$op" in 00:04:44.041 10:48:30 env -- scripts/common.sh@345 -- # : 1 00:04:44.041 10:48:30 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:44.041 10:48:30 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:44.041 10:48:30 env -- scripts/common.sh@365 -- # decimal 1 00:04:44.041 10:48:30 env -- scripts/common.sh@353 -- # local d=1 00:04:44.041 10:48:30 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:44.041 10:48:30 env -- scripts/common.sh@355 -- # echo 1 00:04:44.041 10:48:30 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:44.041 10:48:30 env -- scripts/common.sh@366 -- # decimal 2 00:04:44.041 10:48:30 env -- scripts/common.sh@353 -- # local d=2 00:04:44.041 10:48:30 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:44.041 10:48:30 env -- scripts/common.sh@355 -- # echo 2 00:04:44.041 10:48:30 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:44.041 10:48:30 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:44.041 10:48:30 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:44.041 10:48:30 env -- scripts/common.sh@368 -- # return 0 00:04:44.041 10:48:30 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:44.041 10:48:30 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:44.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.041 --rc genhtml_branch_coverage=1 00:04:44.041 --rc genhtml_function_coverage=1 00:04:44.041 --rc genhtml_legend=1 00:04:44.041 --rc geninfo_all_blocks=1 00:04:44.041 --rc geninfo_unexecuted_blocks=1 00:04:44.041 00:04:44.041 ' 00:04:44.041 10:48:30 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:44.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.041 --rc genhtml_branch_coverage=1 00:04:44.042 --rc genhtml_function_coverage=1 00:04:44.042 --rc genhtml_legend=1 00:04:44.042 --rc geninfo_all_blocks=1 00:04:44.042 --rc geninfo_unexecuted_blocks=1 00:04:44.042 00:04:44.042 ' 00:04:44.042 10:48:30 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:44.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.042 --rc genhtml_branch_coverage=1 00:04:44.042 --rc genhtml_function_coverage=1 00:04:44.042 --rc genhtml_legend=1 00:04:44.042 --rc geninfo_all_blocks=1 00:04:44.042 --rc geninfo_unexecuted_blocks=1 00:04:44.042 00:04:44.042 ' 00:04:44.042 10:48:30 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:44.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.042 --rc genhtml_branch_coverage=1 00:04:44.042 --rc genhtml_function_coverage=1 00:04:44.042 --rc genhtml_legend=1 00:04:44.042 --rc geninfo_all_blocks=1 00:04:44.042 --rc geninfo_unexecuted_blocks=1 00:04:44.042 00:04:44.042 ' 00:04:44.042 10:48:30 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:44.042 10:48:30 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:44.042 10:48:30 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:44.042 10:48:30 env -- common/autotest_common.sh@10 -- # set +x 00:04:44.042 ************************************ 00:04:44.042 START TEST env_memory 00:04:44.042 ************************************ 00:04:44.042 10:48:30 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:44.042 00:04:44.042 00:04:44.042 CUnit - A unit testing framework for C - Version 2.1-3 00:04:44.042 http://cunit.sourceforge.net/ 00:04:44.042 00:04:44.042 00:04:44.042 Suite: memory 00:04:44.042 Test: alloc and free memory map ...[2024-11-15 10:48:30.857030] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:44.042 passed 00:04:44.042 Test: mem map translation ...[2024-11-15 10:48:30.888622] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:44.042 [2024-11-15 10:48:30.888687] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:44.042 [2024-11-15 10:48:30.888747] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:44.042 [2024-11-15 10:48:30.888759] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:44.301 passed 00:04:44.301 Test: mem map registration ...[2024-11-15 10:48:30.952578] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:44.301 [2024-11-15 10:48:30.952636] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:44.301 passed 00:04:44.301 Test: mem map adjacent registrations ...passed 00:04:44.301 00:04:44.301 Run Summary: Type Total Ran Passed Failed Inactive 00:04:44.301 suites 1 1 n/a 0 0 00:04:44.301 tests 4 4 4 0 0 00:04:44.301 asserts 152 152 152 0 n/a 00:04:44.301 00:04:44.301 Elapsed time = 0.213 seconds 00:04:44.301 00:04:44.301 real 0m0.230s 00:04:44.301 user 0m0.213s 00:04:44.301 sys 0m0.014s 00:04:44.301 10:48:31 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:44.301 10:48:31 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:44.301 ************************************ 00:04:44.301 END TEST env_memory 00:04:44.301 ************************************ 00:04:44.301 10:48:31 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:44.301 10:48:31 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:44.301 10:48:31 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:44.301 10:48:31 env -- common/autotest_common.sh@10 -- # set +x 00:04:44.301 ************************************ 00:04:44.301 START TEST env_vtophys 00:04:44.301 ************************************ 00:04:44.301 10:48:31 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:44.301 EAL: lib.eal log level changed from notice to debug 00:04:44.301 EAL: Detected lcore 0 as core 0 on socket 0 00:04:44.301 EAL: Detected lcore 1 as core 0 on socket 0 00:04:44.301 EAL: Detected lcore 2 as core 0 on socket 0 00:04:44.301 EAL: Detected lcore 3 as core 0 on socket 0 00:04:44.301 EAL: Detected lcore 4 as core 0 on socket 0 00:04:44.301 EAL: Detected lcore 5 as core 0 on socket 0 00:04:44.301 EAL: Detected lcore 6 as core 0 on socket 0 00:04:44.301 EAL: Detected lcore 7 as core 0 on socket 0 00:04:44.301 EAL: Detected lcore 8 as core 0 on socket 0 00:04:44.301 EAL: Detected lcore 9 as core 0 on socket 0 00:04:44.301 EAL: Maximum logical cores by configuration: 128 00:04:44.301 EAL: Detected CPU lcores: 10 00:04:44.301 EAL: Detected NUMA nodes: 1 00:04:44.301 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:44.301 EAL: Detected shared linkage of DPDK 00:04:44.301 EAL: No 
shared files mode enabled, IPC will be disabled 00:04:44.301 EAL: Selected IOVA mode 'PA' 00:04:44.301 EAL: Probing VFIO support... 00:04:44.301 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:44.301 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:44.301 EAL: Ask a virtual area of 0x2e000 bytes 00:04:44.301 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:44.301 EAL: Setting up physically contiguous memory... 00:04:44.301 EAL: Setting maximum number of open files to 524288 00:04:44.301 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:44.301 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:44.301 EAL: Ask a virtual area of 0x61000 bytes 00:04:44.301 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:44.301 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:44.301 EAL: Ask a virtual area of 0x400000000 bytes 00:04:44.301 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:44.301 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:44.301 EAL: Ask a virtual area of 0x61000 bytes 00:04:44.301 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:44.301 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:44.301 EAL: Ask a virtual area of 0x400000000 bytes 00:04:44.301 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:44.301 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:44.301 EAL: Ask a virtual area of 0x61000 bytes 00:04:44.301 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:44.301 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:44.301 EAL: Ask a virtual area of 0x400000000 bytes 00:04:44.301 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:44.301 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:44.301 EAL: Ask a virtual area of 0x61000 bytes 00:04:44.301 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:44.301 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:44.301 EAL: Ask a virtual area of 0x400000000 bytes 00:04:44.301 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:44.301 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:44.301 EAL: Hugepages will be freed exactly as allocated. 00:04:44.301 EAL: No shared files mode enabled, IPC is disabled 00:04:44.301 EAL: No shared files mode enabled, IPC is disabled 00:04:44.559 EAL: TSC frequency is ~2200000 KHz 00:04:44.559 EAL: Main lcore 0 is ready (tid=7f55f306ea00;cpuset=[0]) 00:04:44.559 EAL: Trying to obtain current memory policy. 00:04:44.559 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:44.559 EAL: Restoring previous memory policy: 0 00:04:44.559 EAL: request: mp_malloc_sync 00:04:44.559 EAL: No shared files mode enabled, IPC is disabled 00:04:44.559 EAL: Heap on socket 0 was expanded by 2MB 00:04:44.559 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:44.559 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:44.559 EAL: Mem event callback 'spdk:(nil)' registered 00:04:44.559 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:04:44.559 00:04:44.559 00:04:44.559 CUnit - A unit testing framework for C - Version 2.1-3 00:04:44.559 http://cunit.sourceforge.net/ 00:04:44.559 00:04:44.559 00:04:44.559 Suite: components_suite 00:04:44.559 Test: vtophys_malloc_test ...passed 00:04:44.559 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:44.559 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:44.559 EAL: Restoring previous memory policy: 4 00:04:44.559 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.559 EAL: request: mp_malloc_sync 00:04:44.559 EAL: No shared files mode enabled, IPC is disabled 00:04:44.559 EAL: Heap on socket 0 was expanded by 4MB 00:04:44.559 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.559 EAL: request: mp_malloc_sync 00:04:44.559 EAL: No shared files mode enabled, IPC is disabled 00:04:44.559 EAL: Heap on socket 0 was shrunk by 4MB 00:04:44.559 EAL: Trying to obtain current memory policy. 00:04:44.559 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:44.559 EAL: Restoring previous memory policy: 4 00:04:44.559 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.559 EAL: request: mp_malloc_sync 00:04:44.559 EAL: No shared files mode enabled, IPC is disabled 00:04:44.559 EAL: Heap on socket 0 was expanded by 6MB 00:04:44.559 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.559 EAL: request: mp_malloc_sync 00:04:44.559 EAL: No shared files mode enabled, IPC is disabled 00:04:44.559 EAL: Heap on socket 0 was shrunk by 6MB 00:04:44.559 EAL: Trying to obtain current memory policy. 00:04:44.559 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:44.559 EAL: Restoring previous memory policy: 4 00:04:44.559 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.559 EAL: request: mp_malloc_sync 00:04:44.559 EAL: No shared files mode enabled, IPC is disabled 00:04:44.559 EAL: Heap on socket 0 was expanded by 10MB 00:04:44.559 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.559 EAL: request: mp_malloc_sync 00:04:44.559 EAL: No shared files mode enabled, IPC is disabled 00:04:44.559 EAL: Heap on socket 0 was shrunk by 10MB 00:04:44.559 EAL: Trying to obtain current memory policy. 00:04:44.559 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:44.559 EAL: Restoring previous memory policy: 4 00:04:44.559 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.559 EAL: request: mp_malloc_sync 00:04:44.559 EAL: No shared files mode enabled, IPC is disabled 00:04:44.559 EAL: Heap on socket 0 was expanded by 18MB 00:04:44.559 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.559 EAL: request: mp_malloc_sync 00:04:44.559 EAL: No shared files mode enabled, IPC is disabled 00:04:44.559 EAL: Heap on socket 0 was shrunk by 18MB 00:04:44.559 EAL: Trying to obtain current memory policy. 00:04:44.559 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:44.559 EAL: Restoring previous memory policy: 4 00:04:44.559 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.559 EAL: request: mp_malloc_sync 00:04:44.559 EAL: No shared files mode enabled, IPC is disabled 00:04:44.559 EAL: Heap on socket 0 was expanded by 34MB 00:04:44.559 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.559 EAL: request: mp_malloc_sync 00:04:44.559 EAL: No shared files mode enabled, IPC is disabled 00:04:44.559 EAL: Heap on socket 0 was shrunk by 34MB 00:04:44.559 EAL: Trying to obtain current memory policy. 
00:04:44.559 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:44.559 EAL: Restoring previous memory policy: 4 00:04:44.559 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.559 EAL: request: mp_malloc_sync 00:04:44.559 EAL: No shared files mode enabled, IPC is disabled 00:04:44.559 EAL: Heap on socket 0 was expanded by 66MB 00:04:44.559 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.559 EAL: request: mp_malloc_sync 00:04:44.559 EAL: No shared files mode enabled, IPC is disabled 00:04:44.559 EAL: Heap on socket 0 was shrunk by 66MB 00:04:44.559 EAL: Trying to obtain current memory policy. 00:04:44.559 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:44.559 EAL: Restoring previous memory policy: 4 00:04:44.559 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.559 EAL: request: mp_malloc_sync 00:04:44.559 EAL: No shared files mode enabled, IPC is disabled 00:04:44.559 EAL: Heap on socket 0 was expanded by 130MB 00:04:44.559 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.559 EAL: request: mp_malloc_sync 00:04:44.559 EAL: No shared files mode enabled, IPC is disabled 00:04:44.559 EAL: Heap on socket 0 was shrunk by 130MB 00:04:44.559 EAL: Trying to obtain current memory policy. 00:04:44.559 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:44.817 EAL: Restoring previous memory policy: 4 00:04:44.817 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.817 EAL: request: mp_malloc_sync 00:04:44.817 EAL: No shared files mode enabled, IPC is disabled 00:04:44.817 EAL: Heap on socket 0 was expanded by 258MB 00:04:44.817 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.817 EAL: request: mp_malloc_sync 00:04:44.817 EAL: No shared files mode enabled, IPC is disabled 00:04:44.817 EAL: Heap on socket 0 was shrunk by 258MB 00:04:44.817 EAL: Trying to obtain current memory policy. 00:04:44.817 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:45.170 EAL: Restoring previous memory policy: 4 00:04:45.170 EAL: Calling mem event callback 'spdk:(nil)' 00:04:45.170 EAL: request: mp_malloc_sync 00:04:45.170 EAL: No shared files mode enabled, IPC is disabled 00:04:45.170 EAL: Heap on socket 0 was expanded by 514MB 00:04:45.170 EAL: Calling mem event callback 'spdk:(nil)' 00:04:45.170 EAL: request: mp_malloc_sync 00:04:45.170 EAL: No shared files mode enabled, IPC is disabled 00:04:45.170 EAL: Heap on socket 0 was shrunk by 514MB 00:04:45.170 EAL: Trying to obtain current memory policy. 
00:04:45.170 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:45.457 EAL: Restoring previous memory policy: 4 00:04:45.457 EAL: Calling mem event callback 'spdk:(nil)' 00:04:45.457 EAL: request: mp_malloc_sync 00:04:45.457 EAL: No shared files mode enabled, IPC is disabled 00:04:45.457 EAL: Heap on socket 0 was expanded by 1026MB 00:04:45.717 EAL: Calling mem event callback 'spdk:(nil)' 00:04:45.717 passed 00:04:45.717 00:04:45.717 Run Summary: Type Total Ran Passed Failed Inactive 00:04:45.717 suites 1 1 n/a 0 0 00:04:45.717 tests 2 2 2 0 0 00:04:45.717 asserts 5533 5533 5533 0 n/a 00:04:45.717 00:04:45.717 Elapsed time = 1.248 seconds 00:04:45.717 EAL: request: mp_malloc_sync 00:04:45.717 EAL: No shared files mode enabled, IPC is disabled 00:04:45.717 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:45.717 EAL: Calling mem event callback 'spdk:(nil)' 00:04:45.717 EAL: request: mp_malloc_sync 00:04:45.717 EAL: No shared files mode enabled, IPC is disabled 00:04:45.717 EAL: Heap on socket 0 was shrunk by 2MB 00:04:45.717 EAL: No shared files mode enabled, IPC is disabled 00:04:45.717 EAL: No shared files mode enabled, IPC is disabled 00:04:45.717 EAL: No shared files mode enabled, IPC is disabled 00:04:45.717 00:04:45.717 real 0m1.458s 00:04:45.717 user 0m0.815s 00:04:45.717 sys 0m0.510s 00:04:45.717 10:48:32 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:45.717 10:48:32 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:45.717 ************************************ 00:04:45.717 END TEST env_vtophys 00:04:45.717 ************************************ 00:04:45.976 10:48:32 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:45.976 10:48:32 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:45.976 10:48:32 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:45.976 10:48:32 env -- common/autotest_common.sh@10 -- # set +x 00:04:45.976 ************************************ 00:04:45.976 START TEST env_pci 00:04:45.976 ************************************ 00:04:45.976 10:48:32 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:45.976 00:04:45.976 00:04:45.976 CUnit - A unit testing framework for C - Version 2.1-3 00:04:45.976 http://cunit.sourceforge.net/ 00:04:45.976 00:04:45.976 00:04:45.976 Suite: pci 00:04:45.976 Test: pci_hook ...[2024-11-15 10:48:32.616750] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56524 has claimed it 00:04:45.976 passed 00:04:45.976 00:04:45.976 Run Summary: Type Total Ran Passed Failed Inactive 00:04:45.976 suites 1 1 n/a 0 0 00:04:45.976 tests 1 1 1 0 0 00:04:45.976 asserts 25 25 25 0 n/a 00:04:45.976 00:04:45.976 Elapsed time = 0.002 seconds 00:04:45.976 EAL: Cannot find device (10000:00:01.0) 00:04:45.976 EAL: Failed to attach device on primary process 00:04:45.976 00:04:45.976 real 0m0.023s 00:04:45.976 user 0m0.010s 00:04:45.976 sys 0m0.012s 00:04:45.976 10:48:32 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:45.976 10:48:32 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:45.976 ************************************ 00:04:45.976 END TEST env_pci 00:04:45.976 ************************************ 00:04:45.976 10:48:32 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:45.976 10:48:32 env -- env/env.sh@15 -- # uname 00:04:45.976 10:48:32 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:45.976 10:48:32 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:45.976 10:48:32 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:45.976 10:48:32 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:45.976 10:48:32 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:45.976 10:48:32 env -- common/autotest_common.sh@10 -- # set +x 00:04:45.976 ************************************ 00:04:45.976 START TEST env_dpdk_post_init 00:04:45.976 ************************************ 00:04:45.976 10:48:32 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:45.976 EAL: Detected CPU lcores: 10 00:04:45.976 EAL: Detected NUMA nodes: 1 00:04:45.976 EAL: Detected shared linkage of DPDK 00:04:45.976 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:45.976 EAL: Selected IOVA mode 'PA' 00:04:45.976 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:46.236 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:46.236 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:46.236 Starting DPDK initialization... 00:04:46.236 Starting SPDK post initialization... 00:04:46.236 SPDK NVMe probe 00:04:46.236 Attaching to 0000:00:10.0 00:04:46.236 Attaching to 0000:00:11.0 00:04:46.236 Attached to 0000:00:10.0 00:04:46.236 Attached to 0000:00:11.0 00:04:46.236 Cleaning up... 00:04:46.236 00:04:46.236 real 0m0.191s 00:04:46.236 user 0m0.055s 00:04:46.236 sys 0m0.036s 00:04:46.236 10:48:32 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:46.236 10:48:32 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:46.236 ************************************ 00:04:46.236 END TEST env_dpdk_post_init 00:04:46.236 ************************************ 00:04:46.236 10:48:32 env -- env/env.sh@26 -- # uname 00:04:46.236 10:48:32 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:46.236 10:48:32 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:46.236 10:48:32 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:46.236 10:48:32 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:46.236 10:48:32 env -- common/autotest_common.sh@10 -- # set +x 00:04:46.236 ************************************ 00:04:46.236 START TEST env_mem_callbacks 00:04:46.236 ************************************ 00:04:46.236 10:48:32 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:46.236 EAL: Detected CPU lcores: 10 00:04:46.236 EAL: Detected NUMA nodes: 1 00:04:46.236 EAL: Detected shared linkage of DPDK 00:04:46.236 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:46.236 EAL: Selected IOVA mode 'PA' 00:04:46.236 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:46.236 00:04:46.236 00:04:46.236 CUnit - A unit testing framework for C - Version 2.1-3 00:04:46.236 http://cunit.sourceforge.net/ 00:04:46.236 00:04:46.236 00:04:46.236 Suite: memory 00:04:46.236 Test: test ... 
00:04:46.236 register 0x200000200000 2097152 00:04:46.236 malloc 3145728 00:04:46.236 register 0x200000400000 4194304 00:04:46.236 buf 0x200000500000 len 3145728 PASSED 00:04:46.236 malloc 64 00:04:46.236 buf 0x2000004fff40 len 64 PASSED 00:04:46.236 malloc 4194304 00:04:46.236 register 0x200000800000 6291456 00:04:46.236 buf 0x200000a00000 len 4194304 PASSED 00:04:46.236 free 0x200000500000 3145728 00:04:46.236 free 0x2000004fff40 64 00:04:46.236 unregister 0x200000400000 4194304 PASSED 00:04:46.236 free 0x200000a00000 4194304 00:04:46.236 unregister 0x200000800000 6291456 PASSED 00:04:46.236 malloc 8388608 00:04:46.236 register 0x200000400000 10485760 00:04:46.236 buf 0x200000600000 len 8388608 PASSED 00:04:46.236 free 0x200000600000 8388608 00:04:46.236 unregister 0x200000400000 10485760 PASSED 00:04:46.236 passed 00:04:46.236 00:04:46.236 Run Summary: Type Total Ran Passed Failed Inactive 00:04:46.236 suites 1 1 n/a 0 0 00:04:46.236 tests 1 1 1 0 0 00:04:46.236 asserts 15 15 15 0 n/a 00:04:46.236 00:04:46.236 Elapsed time = 0.009 seconds 00:04:46.236 00:04:46.236 real 0m0.140s 00:04:46.236 user 0m0.016s 00:04:46.236 sys 0m0.023s 00:04:46.236 10:48:33 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:46.236 10:48:33 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:46.236 ************************************ 00:04:46.236 END TEST env_mem_callbacks 00:04:46.236 ************************************ 00:04:46.495 ************************************ 00:04:46.495 END TEST env 00:04:46.495 ************************************ 00:04:46.495 00:04:46.495 real 0m2.493s 00:04:46.495 user 0m1.326s 00:04:46.495 sys 0m0.826s 00:04:46.495 10:48:33 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:46.495 10:48:33 env -- common/autotest_common.sh@10 -- # set +x 00:04:46.495 10:48:33 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:46.495 10:48:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:46.495 10:48:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:46.495 10:48:33 -- common/autotest_common.sh@10 -- # set +x 00:04:46.495 ************************************ 00:04:46.495 START TEST rpc 00:04:46.495 ************************************ 00:04:46.496 10:48:33 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:46.496 * Looking for test storage... 
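The three env helpers exercised above are standalone binaries; a minimal sketch of invoking them outside the harness, reusing the exact paths, core mask and --base-virtaddr value shown in this log:

  /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut
  /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
  /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks

In the mem_callbacks trace just above, every registration lands on a 2 MiB boundary (4194304, 6291456 and 10485760 bytes are 2, 3 and 5 hugepages respectively), which is why the 3145728-byte malloc shows up as a 4194304-byte register — the callback granularity is consistent with DPDK's 2 MiB hugepage size.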
00:04:46.496 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:46.496 10:48:33 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:46.496 10:48:33 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:46.496 10:48:33 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:46.496 10:48:33 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:46.496 10:48:33 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:46.496 10:48:33 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:46.496 10:48:33 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:46.496 10:48:33 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:46.496 10:48:33 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:46.496 10:48:33 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:46.496 10:48:33 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:46.496 10:48:33 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:46.496 10:48:33 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:46.496 10:48:33 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:46.496 10:48:33 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:46.496 10:48:33 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:46.496 10:48:33 rpc -- scripts/common.sh@345 -- # : 1 00:04:46.496 10:48:33 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:46.496 10:48:33 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:46.496 10:48:33 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:46.496 10:48:33 rpc -- scripts/common.sh@353 -- # local d=1 00:04:46.496 10:48:33 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:46.496 10:48:33 rpc -- scripts/common.sh@355 -- # echo 1 00:04:46.756 10:48:33 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:46.756 10:48:33 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:46.756 10:48:33 rpc -- scripts/common.sh@353 -- # local d=2 00:04:46.756 10:48:33 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:46.756 10:48:33 rpc -- scripts/common.sh@355 -- # echo 2 00:04:46.756 10:48:33 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:46.756 10:48:33 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:46.756 10:48:33 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:46.756 10:48:33 rpc -- scripts/common.sh@368 -- # return 0 00:04:46.756 10:48:33 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:46.756 10:48:33 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:46.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.756 --rc genhtml_branch_coverage=1 00:04:46.756 --rc genhtml_function_coverage=1 00:04:46.756 --rc genhtml_legend=1 00:04:46.756 --rc geninfo_all_blocks=1 00:04:46.756 --rc geninfo_unexecuted_blocks=1 00:04:46.756 00:04:46.756 ' 00:04:46.756 10:48:33 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:46.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.756 --rc genhtml_branch_coverage=1 00:04:46.756 --rc genhtml_function_coverage=1 00:04:46.756 --rc genhtml_legend=1 00:04:46.756 --rc geninfo_all_blocks=1 00:04:46.756 --rc geninfo_unexecuted_blocks=1 00:04:46.756 00:04:46.756 ' 00:04:46.756 10:48:33 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:46.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.756 --rc genhtml_branch_coverage=1 00:04:46.756 --rc genhtml_function_coverage=1 00:04:46.756 --rc 
genhtml_legend=1 00:04:46.756 --rc geninfo_all_blocks=1 00:04:46.756 --rc geninfo_unexecuted_blocks=1 00:04:46.756 00:04:46.756 ' 00:04:46.756 10:48:33 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:46.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.756 --rc genhtml_branch_coverage=1 00:04:46.756 --rc genhtml_function_coverage=1 00:04:46.756 --rc genhtml_legend=1 00:04:46.756 --rc geninfo_all_blocks=1 00:04:46.756 --rc geninfo_unexecuted_blocks=1 00:04:46.756 00:04:46.756 ' 00:04:46.756 10:48:33 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56647 00:04:46.756 10:48:33 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:46.756 10:48:33 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56647 00:04:46.756 10:48:33 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:46.756 10:48:33 rpc -- common/autotest_common.sh@835 -- # '[' -z 56647 ']' 00:04:46.756 10:48:33 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:46.756 10:48:33 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:46.756 10:48:33 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:46.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:46.756 10:48:33 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:46.756 10:48:33 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.756 [2024-11-15 10:48:33.437214] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:04:46.757 [2024-11-15 10:48:33.437321] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56647 ] 00:04:46.757 [2024-11-15 10:48:33.584110] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:47.015 [2024-11-15 10:48:33.627999] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:47.015 [2024-11-15 10:48:33.628071] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56647' to capture a snapshot of events at runtime. 00:04:47.015 [2024-11-15 10:48:33.628097] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:47.015 [2024-11-15 10:48:33.628105] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:47.015 [2024-11-15 10:48:33.628111] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56647 for offline analysis/debug. 
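Because spdk_tgt was started with '-e bdev', the bdev tracepoint group is active and a trace shm file exists for the lifetime of the process; a small sketch of capturing it, using the exact command and shm path the startup notices print:

  spdk_trace -s spdk_tgt -p 56647
  cp /dev/shm/spdk_tgt_trace.pid56647 /tmp/   # keep a copy for offline analysis once the target exits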
00:04:47.015 [2024-11-15 10:48:33.628498] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.016 [2024-11-15 10:48:33.695826] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:47.275 10:48:33 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:47.275 10:48:33 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:47.275 10:48:33 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:47.275 10:48:33 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:47.275 10:48:33 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:47.275 10:48:33 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:47.275 10:48:33 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:47.275 10:48:33 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:47.275 10:48:33 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:47.275 ************************************ 00:04:47.275 START TEST rpc_integrity 00:04:47.275 ************************************ 00:04:47.275 10:48:33 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:47.275 10:48:33 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:47.275 10:48:33 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:47.275 10:48:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:47.275 10:48:33 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:47.275 10:48:33 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:47.275 10:48:33 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:47.275 10:48:33 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:47.275 10:48:33 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:47.275 10:48:33 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:47.275 10:48:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:47.275 10:48:33 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:47.275 10:48:33 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:47.275 10:48:33 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:47.275 10:48:33 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:47.275 10:48:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:47.275 10:48:34 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:47.275 10:48:34 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:47.275 { 00:04:47.275 "name": "Malloc0", 00:04:47.275 "aliases": [ 00:04:47.275 "e468e551-9847-4c44-810a-98ac49d1a0f8" 00:04:47.275 ], 00:04:47.275 "product_name": "Malloc disk", 00:04:47.275 "block_size": 512, 00:04:47.275 "num_blocks": 16384, 00:04:47.275 "uuid": "e468e551-9847-4c44-810a-98ac49d1a0f8", 00:04:47.275 "assigned_rate_limits": { 00:04:47.275 "rw_ios_per_sec": 0, 00:04:47.275 "rw_mbytes_per_sec": 0, 00:04:47.275 "r_mbytes_per_sec": 0, 00:04:47.275 "w_mbytes_per_sec": 0 00:04:47.275 }, 00:04:47.275 "claimed": false, 00:04:47.275 "zoned": false, 00:04:47.275 
"supported_io_types": { 00:04:47.275 "read": true, 00:04:47.275 "write": true, 00:04:47.275 "unmap": true, 00:04:47.275 "flush": true, 00:04:47.275 "reset": true, 00:04:47.275 "nvme_admin": false, 00:04:47.275 "nvme_io": false, 00:04:47.275 "nvme_io_md": false, 00:04:47.275 "write_zeroes": true, 00:04:47.275 "zcopy": true, 00:04:47.275 "get_zone_info": false, 00:04:47.275 "zone_management": false, 00:04:47.275 "zone_append": false, 00:04:47.275 "compare": false, 00:04:47.275 "compare_and_write": false, 00:04:47.275 "abort": true, 00:04:47.275 "seek_hole": false, 00:04:47.275 "seek_data": false, 00:04:47.275 "copy": true, 00:04:47.275 "nvme_iov_md": false 00:04:47.275 }, 00:04:47.275 "memory_domains": [ 00:04:47.275 { 00:04:47.275 "dma_device_id": "system", 00:04:47.275 "dma_device_type": 1 00:04:47.275 }, 00:04:47.275 { 00:04:47.275 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:47.275 "dma_device_type": 2 00:04:47.275 } 00:04:47.275 ], 00:04:47.275 "driver_specific": {} 00:04:47.275 } 00:04:47.275 ]' 00:04:47.275 10:48:34 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:47.275 10:48:34 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:47.275 10:48:34 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:47.275 10:48:34 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:47.275 10:48:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:47.275 [2024-11-15 10:48:34.072622] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:47.275 [2024-11-15 10:48:34.072681] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:47.275 [2024-11-15 10:48:34.072709] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2097f10 00:04:47.275 [2024-11-15 10:48:34.072724] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:47.276 [2024-11-15 10:48:34.074466] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:47.276 [2024-11-15 10:48:34.074504] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:47.276 Passthru0 00:04:47.276 10:48:34 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:47.276 10:48:34 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:47.276 10:48:34 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:47.276 10:48:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:47.276 10:48:34 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:47.276 10:48:34 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:47.276 { 00:04:47.276 "name": "Malloc0", 00:04:47.276 "aliases": [ 00:04:47.276 "e468e551-9847-4c44-810a-98ac49d1a0f8" 00:04:47.276 ], 00:04:47.276 "product_name": "Malloc disk", 00:04:47.276 "block_size": 512, 00:04:47.276 "num_blocks": 16384, 00:04:47.276 "uuid": "e468e551-9847-4c44-810a-98ac49d1a0f8", 00:04:47.276 "assigned_rate_limits": { 00:04:47.276 "rw_ios_per_sec": 0, 00:04:47.276 "rw_mbytes_per_sec": 0, 00:04:47.276 "r_mbytes_per_sec": 0, 00:04:47.276 "w_mbytes_per_sec": 0 00:04:47.276 }, 00:04:47.276 "claimed": true, 00:04:47.276 "claim_type": "exclusive_write", 00:04:47.276 "zoned": false, 00:04:47.276 "supported_io_types": { 00:04:47.276 "read": true, 00:04:47.276 "write": true, 00:04:47.276 "unmap": true, 00:04:47.276 "flush": true, 00:04:47.276 "reset": true, 00:04:47.276 "nvme_admin": false, 
00:04:47.276 "nvme_io": false, 00:04:47.276 "nvme_io_md": false, 00:04:47.276 "write_zeroes": true, 00:04:47.276 "zcopy": true, 00:04:47.276 "get_zone_info": false, 00:04:47.276 "zone_management": false, 00:04:47.276 "zone_append": false, 00:04:47.276 "compare": false, 00:04:47.276 "compare_and_write": false, 00:04:47.276 "abort": true, 00:04:47.276 "seek_hole": false, 00:04:47.276 "seek_data": false, 00:04:47.276 "copy": true, 00:04:47.276 "nvme_iov_md": false 00:04:47.276 }, 00:04:47.276 "memory_domains": [ 00:04:47.276 { 00:04:47.276 "dma_device_id": "system", 00:04:47.276 "dma_device_type": 1 00:04:47.276 }, 00:04:47.276 { 00:04:47.276 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:47.276 "dma_device_type": 2 00:04:47.276 } 00:04:47.276 ], 00:04:47.276 "driver_specific": {} 00:04:47.276 }, 00:04:47.276 { 00:04:47.276 "name": "Passthru0", 00:04:47.276 "aliases": [ 00:04:47.276 "88f48be7-3fcc-5174-840c-8256933600c0" 00:04:47.276 ], 00:04:47.276 "product_name": "passthru", 00:04:47.276 "block_size": 512, 00:04:47.276 "num_blocks": 16384, 00:04:47.276 "uuid": "88f48be7-3fcc-5174-840c-8256933600c0", 00:04:47.276 "assigned_rate_limits": { 00:04:47.276 "rw_ios_per_sec": 0, 00:04:47.276 "rw_mbytes_per_sec": 0, 00:04:47.276 "r_mbytes_per_sec": 0, 00:04:47.276 "w_mbytes_per_sec": 0 00:04:47.276 }, 00:04:47.276 "claimed": false, 00:04:47.276 "zoned": false, 00:04:47.276 "supported_io_types": { 00:04:47.276 "read": true, 00:04:47.276 "write": true, 00:04:47.276 "unmap": true, 00:04:47.276 "flush": true, 00:04:47.276 "reset": true, 00:04:47.276 "nvme_admin": false, 00:04:47.276 "nvme_io": false, 00:04:47.276 "nvme_io_md": false, 00:04:47.276 "write_zeroes": true, 00:04:47.276 "zcopy": true, 00:04:47.276 "get_zone_info": false, 00:04:47.276 "zone_management": false, 00:04:47.276 "zone_append": false, 00:04:47.276 "compare": false, 00:04:47.276 "compare_and_write": false, 00:04:47.276 "abort": true, 00:04:47.276 "seek_hole": false, 00:04:47.276 "seek_data": false, 00:04:47.276 "copy": true, 00:04:47.276 "nvme_iov_md": false 00:04:47.276 }, 00:04:47.276 "memory_domains": [ 00:04:47.276 { 00:04:47.276 "dma_device_id": "system", 00:04:47.276 "dma_device_type": 1 00:04:47.276 }, 00:04:47.276 { 00:04:47.276 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:47.276 "dma_device_type": 2 00:04:47.276 } 00:04:47.276 ], 00:04:47.276 "driver_specific": { 00:04:47.276 "passthru": { 00:04:47.276 "name": "Passthru0", 00:04:47.276 "base_bdev_name": "Malloc0" 00:04:47.276 } 00:04:47.276 } 00:04:47.276 } 00:04:47.276 ]' 00:04:47.276 10:48:34 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:47.535 10:48:34 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:47.535 10:48:34 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:47.535 10:48:34 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:47.535 10:48:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:47.535 10:48:34 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:47.535 10:48:34 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:47.535 10:48:34 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:47.535 10:48:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:47.535 10:48:34 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:47.535 10:48:34 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:47.535 10:48:34 rpc.rpc_integrity -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:04:47.535 10:48:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:47.535 10:48:34 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:47.535 10:48:34 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:47.535 10:48:34 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:47.535 10:48:34 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:47.535 00:04:47.535 real 0m0.329s 00:04:47.535 user 0m0.224s 00:04:47.535 sys 0m0.039s 00:04:47.535 ************************************ 00:04:47.535 END TEST rpc_integrity 00:04:47.535 ************************************ 00:04:47.535 10:48:34 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:47.535 10:48:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:47.535 10:48:34 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:47.535 10:48:34 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:47.535 10:48:34 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:47.535 10:48:34 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:47.535 ************************************ 00:04:47.535 START TEST rpc_plugins 00:04:47.535 ************************************ 00:04:47.535 10:48:34 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:47.535 10:48:34 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:47.535 10:48:34 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:47.535 10:48:34 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:47.535 10:48:34 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:47.535 10:48:34 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:47.535 10:48:34 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:47.535 10:48:34 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:47.535 10:48:34 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:47.535 10:48:34 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:47.535 10:48:34 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:47.535 { 00:04:47.535 "name": "Malloc1", 00:04:47.535 "aliases": [ 00:04:47.535 "aa3d9964-0d70-46fa-bb42-ee20d3dde652" 00:04:47.535 ], 00:04:47.535 "product_name": "Malloc disk", 00:04:47.535 "block_size": 4096, 00:04:47.535 "num_blocks": 256, 00:04:47.535 "uuid": "aa3d9964-0d70-46fa-bb42-ee20d3dde652", 00:04:47.535 "assigned_rate_limits": { 00:04:47.535 "rw_ios_per_sec": 0, 00:04:47.535 "rw_mbytes_per_sec": 0, 00:04:47.535 "r_mbytes_per_sec": 0, 00:04:47.535 "w_mbytes_per_sec": 0 00:04:47.535 }, 00:04:47.535 "claimed": false, 00:04:47.535 "zoned": false, 00:04:47.535 "supported_io_types": { 00:04:47.535 "read": true, 00:04:47.535 "write": true, 00:04:47.535 "unmap": true, 00:04:47.535 "flush": true, 00:04:47.535 "reset": true, 00:04:47.535 "nvme_admin": false, 00:04:47.535 "nvme_io": false, 00:04:47.535 "nvme_io_md": false, 00:04:47.535 "write_zeroes": true, 00:04:47.535 "zcopy": true, 00:04:47.535 "get_zone_info": false, 00:04:47.535 "zone_management": false, 00:04:47.535 "zone_append": false, 00:04:47.535 "compare": false, 00:04:47.535 "compare_and_write": false, 00:04:47.535 "abort": true, 00:04:47.535 "seek_hole": false, 00:04:47.535 "seek_data": false, 00:04:47.535 "copy": true, 00:04:47.535 "nvme_iov_md": false 00:04:47.535 }, 00:04:47.535 "memory_domains": [ 00:04:47.535 { 
00:04:47.535 "dma_device_id": "system", 00:04:47.535 "dma_device_type": 1 00:04:47.535 }, 00:04:47.535 { 00:04:47.535 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:47.535 "dma_device_type": 2 00:04:47.535 } 00:04:47.535 ], 00:04:47.535 "driver_specific": {} 00:04:47.535 } 00:04:47.535 ]' 00:04:47.535 10:48:34 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:47.535 10:48:34 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:47.535 10:48:34 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:47.795 10:48:34 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:47.795 10:48:34 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:47.795 10:48:34 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:47.795 10:48:34 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:47.795 10:48:34 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:47.795 10:48:34 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:47.795 10:48:34 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:47.795 10:48:34 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:47.795 10:48:34 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:47.795 10:48:34 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:47.795 00:04:47.795 real 0m0.165s 00:04:47.795 user 0m0.106s 00:04:47.795 sys 0m0.021s 00:04:47.795 10:48:34 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:47.795 ************************************ 00:04:47.795 END TEST rpc_plugins 00:04:47.795 10:48:34 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:47.795 ************************************ 00:04:47.795 10:48:34 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:47.795 10:48:34 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:47.795 10:48:34 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:47.795 10:48:34 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:47.795 ************************************ 00:04:47.795 START TEST rpc_trace_cmd_test 00:04:47.795 ************************************ 00:04:47.795 10:48:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:47.795 10:48:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:47.795 10:48:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:47.795 10:48:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:47.795 10:48:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:47.795 10:48:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:47.795 10:48:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:47.795 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56647", 00:04:47.795 "tpoint_group_mask": "0x8", 00:04:47.795 "iscsi_conn": { 00:04:47.795 "mask": "0x2", 00:04:47.795 "tpoint_mask": "0x0" 00:04:47.795 }, 00:04:47.795 "scsi": { 00:04:47.795 "mask": "0x4", 00:04:47.795 "tpoint_mask": "0x0" 00:04:47.795 }, 00:04:47.795 "bdev": { 00:04:47.795 "mask": "0x8", 00:04:47.795 "tpoint_mask": "0xffffffffffffffff" 00:04:47.795 }, 00:04:47.795 "nvmf_rdma": { 00:04:47.795 "mask": "0x10", 00:04:47.795 "tpoint_mask": "0x0" 00:04:47.795 }, 00:04:47.795 "nvmf_tcp": { 00:04:47.795 "mask": "0x20", 00:04:47.795 "tpoint_mask": "0x0" 00:04:47.795 }, 00:04:47.795 "ftl": { 00:04:47.795 
"mask": "0x40", 00:04:47.795 "tpoint_mask": "0x0" 00:04:47.795 }, 00:04:47.795 "blobfs": { 00:04:47.795 "mask": "0x80", 00:04:47.795 "tpoint_mask": "0x0" 00:04:47.795 }, 00:04:47.795 "dsa": { 00:04:47.795 "mask": "0x200", 00:04:47.795 "tpoint_mask": "0x0" 00:04:47.795 }, 00:04:47.795 "thread": { 00:04:47.795 "mask": "0x400", 00:04:47.795 "tpoint_mask": "0x0" 00:04:47.795 }, 00:04:47.795 "nvme_pcie": { 00:04:47.795 "mask": "0x800", 00:04:47.795 "tpoint_mask": "0x0" 00:04:47.795 }, 00:04:47.795 "iaa": { 00:04:47.795 "mask": "0x1000", 00:04:47.795 "tpoint_mask": "0x0" 00:04:47.795 }, 00:04:47.795 "nvme_tcp": { 00:04:47.795 "mask": "0x2000", 00:04:47.795 "tpoint_mask": "0x0" 00:04:47.795 }, 00:04:47.795 "bdev_nvme": { 00:04:47.795 "mask": "0x4000", 00:04:47.795 "tpoint_mask": "0x0" 00:04:47.795 }, 00:04:47.795 "sock": { 00:04:47.795 "mask": "0x8000", 00:04:47.795 "tpoint_mask": "0x0" 00:04:47.795 }, 00:04:47.796 "blob": { 00:04:47.796 "mask": "0x10000", 00:04:47.796 "tpoint_mask": "0x0" 00:04:47.796 }, 00:04:47.796 "bdev_raid": { 00:04:47.796 "mask": "0x20000", 00:04:47.796 "tpoint_mask": "0x0" 00:04:47.796 }, 00:04:47.796 "scheduler": { 00:04:47.796 "mask": "0x40000", 00:04:47.796 "tpoint_mask": "0x0" 00:04:47.796 } 00:04:47.796 }' 00:04:47.796 10:48:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:47.796 10:48:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:47.796 10:48:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:47.796 10:48:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:47.796 10:48:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:48.063 10:48:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:48.063 10:48:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:48.063 10:48:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:48.063 10:48:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:48.063 10:48:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:48.063 00:04:48.063 real 0m0.253s 00:04:48.063 user 0m0.219s 00:04:48.063 sys 0m0.025s 00:04:48.063 10:48:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:48.063 ************************************ 00:04:48.063 END TEST rpc_trace_cmd_test 00:04:48.063 10:48:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:48.063 ************************************ 00:04:48.063 10:48:34 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:48.063 10:48:34 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:48.063 10:48:34 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:48.063 10:48:34 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:48.063 10:48:34 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:48.063 10:48:34 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.063 ************************************ 00:04:48.063 START TEST rpc_daemon_integrity 00:04:48.063 ************************************ 00:04:48.063 10:48:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:48.063 10:48:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:48.063 10:48:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.063 10:48:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:48.063 
10:48:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:48.063 10:48:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:48.063 10:48:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:48.063 10:48:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:48.063 10:48:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:48.063 10:48:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.063 10:48:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:48.063 10:48:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:48.063 10:48:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:48.063 10:48:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:48.063 10:48:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.063 10:48:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:48.063 10:48:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:48.063 10:48:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:48.063 { 00:04:48.063 "name": "Malloc2", 00:04:48.063 "aliases": [ 00:04:48.063 "2fbd6a5a-b0c3-4c6f-a8a2-0730381c7039" 00:04:48.063 ], 00:04:48.063 "product_name": "Malloc disk", 00:04:48.063 "block_size": 512, 00:04:48.063 "num_blocks": 16384, 00:04:48.063 "uuid": "2fbd6a5a-b0c3-4c6f-a8a2-0730381c7039", 00:04:48.063 "assigned_rate_limits": { 00:04:48.063 "rw_ios_per_sec": 0, 00:04:48.063 "rw_mbytes_per_sec": 0, 00:04:48.063 "r_mbytes_per_sec": 0, 00:04:48.063 "w_mbytes_per_sec": 0 00:04:48.063 }, 00:04:48.063 "claimed": false, 00:04:48.063 "zoned": false, 00:04:48.063 "supported_io_types": { 00:04:48.063 "read": true, 00:04:48.063 "write": true, 00:04:48.063 "unmap": true, 00:04:48.063 "flush": true, 00:04:48.063 "reset": true, 00:04:48.063 "nvme_admin": false, 00:04:48.063 "nvme_io": false, 00:04:48.063 "nvme_io_md": false, 00:04:48.063 "write_zeroes": true, 00:04:48.063 "zcopy": true, 00:04:48.063 "get_zone_info": false, 00:04:48.063 "zone_management": false, 00:04:48.063 "zone_append": false, 00:04:48.063 "compare": false, 00:04:48.063 "compare_and_write": false, 00:04:48.063 "abort": true, 00:04:48.063 "seek_hole": false, 00:04:48.063 "seek_data": false, 00:04:48.063 "copy": true, 00:04:48.063 "nvme_iov_md": false 00:04:48.063 }, 00:04:48.063 "memory_domains": [ 00:04:48.063 { 00:04:48.063 "dma_device_id": "system", 00:04:48.063 "dma_device_type": 1 00:04:48.063 }, 00:04:48.063 { 00:04:48.063 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:48.063 "dma_device_type": 2 00:04:48.063 } 00:04:48.063 ], 00:04:48.063 "driver_specific": {} 00:04:48.063 } 00:04:48.063 ]' 00:04:48.063 10:48:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:48.338 10:48:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:48.338 10:48:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:48.338 10:48:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.338 10:48:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:48.338 [2024-11-15 10:48:34.965458] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:48.338 [2024-11-15 10:48:34.965518] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:04:48.338 [2024-11-15 10:48:34.965549] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2232980 00:04:48.338 [2024-11-15 10:48:34.965561] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:48.338 [2024-11-15 10:48:34.967347] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:48.338 [2024-11-15 10:48:34.967384] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:48.338 Passthru0 00:04:48.338 10:48:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:48.338 10:48:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:48.338 10:48:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.338 10:48:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:48.338 10:48:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:48.338 10:48:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:48.338 { 00:04:48.338 "name": "Malloc2", 00:04:48.338 "aliases": [ 00:04:48.338 "2fbd6a5a-b0c3-4c6f-a8a2-0730381c7039" 00:04:48.338 ], 00:04:48.338 "product_name": "Malloc disk", 00:04:48.338 "block_size": 512, 00:04:48.338 "num_blocks": 16384, 00:04:48.338 "uuid": "2fbd6a5a-b0c3-4c6f-a8a2-0730381c7039", 00:04:48.338 "assigned_rate_limits": { 00:04:48.338 "rw_ios_per_sec": 0, 00:04:48.338 "rw_mbytes_per_sec": 0, 00:04:48.338 "r_mbytes_per_sec": 0, 00:04:48.338 "w_mbytes_per_sec": 0 00:04:48.338 }, 00:04:48.338 "claimed": true, 00:04:48.338 "claim_type": "exclusive_write", 00:04:48.338 "zoned": false, 00:04:48.338 "supported_io_types": { 00:04:48.338 "read": true, 00:04:48.338 "write": true, 00:04:48.338 "unmap": true, 00:04:48.338 "flush": true, 00:04:48.338 "reset": true, 00:04:48.338 "nvme_admin": false, 00:04:48.338 "nvme_io": false, 00:04:48.338 "nvme_io_md": false, 00:04:48.338 "write_zeroes": true, 00:04:48.338 "zcopy": true, 00:04:48.338 "get_zone_info": false, 00:04:48.338 "zone_management": false, 00:04:48.338 "zone_append": false, 00:04:48.338 "compare": false, 00:04:48.338 "compare_and_write": false, 00:04:48.338 "abort": true, 00:04:48.338 "seek_hole": false, 00:04:48.338 "seek_data": false, 00:04:48.338 "copy": true, 00:04:48.338 "nvme_iov_md": false 00:04:48.338 }, 00:04:48.338 "memory_domains": [ 00:04:48.338 { 00:04:48.338 "dma_device_id": "system", 00:04:48.338 "dma_device_type": 1 00:04:48.338 }, 00:04:48.338 { 00:04:48.338 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:48.338 "dma_device_type": 2 00:04:48.338 } 00:04:48.338 ], 00:04:48.338 "driver_specific": {} 00:04:48.338 }, 00:04:48.338 { 00:04:48.338 "name": "Passthru0", 00:04:48.338 "aliases": [ 00:04:48.338 "060f8faf-6a24-5a6d-a3b5-0ba878496262" 00:04:48.338 ], 00:04:48.339 "product_name": "passthru", 00:04:48.339 "block_size": 512, 00:04:48.339 "num_blocks": 16384, 00:04:48.339 "uuid": "060f8faf-6a24-5a6d-a3b5-0ba878496262", 00:04:48.339 "assigned_rate_limits": { 00:04:48.339 "rw_ios_per_sec": 0, 00:04:48.339 "rw_mbytes_per_sec": 0, 00:04:48.339 "r_mbytes_per_sec": 0, 00:04:48.339 "w_mbytes_per_sec": 0 00:04:48.339 }, 00:04:48.339 "claimed": false, 00:04:48.339 "zoned": false, 00:04:48.339 "supported_io_types": { 00:04:48.339 "read": true, 00:04:48.339 "write": true, 00:04:48.339 "unmap": true, 00:04:48.339 "flush": true, 00:04:48.339 "reset": true, 00:04:48.339 "nvme_admin": false, 00:04:48.339 "nvme_io": false, 00:04:48.339 
"nvme_io_md": false, 00:04:48.339 "write_zeroes": true, 00:04:48.339 "zcopy": true, 00:04:48.339 "get_zone_info": false, 00:04:48.339 "zone_management": false, 00:04:48.339 "zone_append": false, 00:04:48.339 "compare": false, 00:04:48.339 "compare_and_write": false, 00:04:48.339 "abort": true, 00:04:48.339 "seek_hole": false, 00:04:48.339 "seek_data": false, 00:04:48.339 "copy": true, 00:04:48.339 "nvme_iov_md": false 00:04:48.339 }, 00:04:48.339 "memory_domains": [ 00:04:48.339 { 00:04:48.339 "dma_device_id": "system", 00:04:48.339 "dma_device_type": 1 00:04:48.339 }, 00:04:48.339 { 00:04:48.339 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:48.339 "dma_device_type": 2 00:04:48.339 } 00:04:48.339 ], 00:04:48.339 "driver_specific": { 00:04:48.339 "passthru": { 00:04:48.339 "name": "Passthru0", 00:04:48.339 "base_bdev_name": "Malloc2" 00:04:48.339 } 00:04:48.339 } 00:04:48.339 } 00:04:48.339 ]' 00:04:48.339 10:48:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:48.339 10:48:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:48.339 10:48:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:48.339 10:48:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.339 10:48:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:48.339 10:48:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:48.339 10:48:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:48.339 10:48:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.339 10:48:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:48.339 10:48:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:48.339 10:48:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:48.339 10:48:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.339 10:48:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:48.339 10:48:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:48.339 10:48:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:48.339 10:48:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:48.339 10:48:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:48.339 00:04:48.339 real 0m0.314s 00:04:48.339 user 0m0.210s 00:04:48.339 sys 0m0.039s 00:04:48.339 10:48:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:48.339 ************************************ 00:04:48.339 10:48:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:48.339 END TEST rpc_daemon_integrity 00:04:48.339 ************************************ 00:04:48.339 10:48:35 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:48.339 10:48:35 rpc -- rpc/rpc.sh@84 -- # killprocess 56647 00:04:48.339 10:48:35 rpc -- common/autotest_common.sh@954 -- # '[' -z 56647 ']' 00:04:48.339 10:48:35 rpc -- common/autotest_common.sh@958 -- # kill -0 56647 00:04:48.339 10:48:35 rpc -- common/autotest_common.sh@959 -- # uname 00:04:48.339 10:48:35 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:48.339 10:48:35 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56647 00:04:48.597 10:48:35 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:04:48.597 10:48:35 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:48.597 killing process with pid 56647 00:04:48.597 10:48:35 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56647' 00:04:48.597 10:48:35 rpc -- common/autotest_common.sh@973 -- # kill 56647 00:04:48.597 10:48:35 rpc -- common/autotest_common.sh@978 -- # wait 56647 00:04:48.855 00:04:48.855 real 0m2.426s 00:04:48.855 user 0m3.052s 00:04:48.855 sys 0m0.682s 00:04:48.855 10:48:35 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:48.855 10:48:35 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.855 ************************************ 00:04:48.855 END TEST rpc 00:04:48.855 ************************************ 00:04:48.855 10:48:35 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:48.855 10:48:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:48.855 10:48:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:48.855 10:48:35 -- common/autotest_common.sh@10 -- # set +x 00:04:48.855 ************************************ 00:04:48.855 START TEST skip_rpc 00:04:48.855 ************************************ 00:04:48.855 10:48:35 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:48.855 * Looking for test storage... 00:04:49.114 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:49.114 10:48:35 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:49.114 10:48:35 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:49.114 10:48:35 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:49.114 10:48:35 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:49.114 10:48:35 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:49.114 10:48:35 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:49.114 10:48:35 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:49.114 10:48:35 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:49.114 10:48:35 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:49.114 10:48:35 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:49.114 10:48:35 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:49.114 10:48:35 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:49.114 10:48:35 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:49.114 10:48:35 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:49.114 10:48:35 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:49.114 10:48:35 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:49.114 10:48:35 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:49.114 10:48:35 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:49.114 10:48:35 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:49.114 10:48:35 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:49.114 10:48:35 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:49.114 10:48:35 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:49.114 10:48:35 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:49.114 10:48:35 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:49.114 10:48:35 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:49.114 10:48:35 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:49.114 10:48:35 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:49.114 10:48:35 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:49.114 10:48:35 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:49.114 10:48:35 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:49.114 10:48:35 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:49.114 10:48:35 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:49.114 10:48:35 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:49.114 10:48:35 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:49.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.114 --rc genhtml_branch_coverage=1 00:04:49.114 --rc genhtml_function_coverage=1 00:04:49.114 --rc genhtml_legend=1 00:04:49.114 --rc geninfo_all_blocks=1 00:04:49.114 --rc geninfo_unexecuted_blocks=1 00:04:49.114 00:04:49.114 ' 00:04:49.114 10:48:35 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:49.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.114 --rc genhtml_branch_coverage=1 00:04:49.114 --rc genhtml_function_coverage=1 00:04:49.114 --rc genhtml_legend=1 00:04:49.114 --rc geninfo_all_blocks=1 00:04:49.114 --rc geninfo_unexecuted_blocks=1 00:04:49.114 00:04:49.114 ' 00:04:49.114 10:48:35 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:49.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.114 --rc genhtml_branch_coverage=1 00:04:49.114 --rc genhtml_function_coverage=1 00:04:49.114 --rc genhtml_legend=1 00:04:49.114 --rc geninfo_all_blocks=1 00:04:49.114 --rc geninfo_unexecuted_blocks=1 00:04:49.114 00:04:49.114 ' 00:04:49.114 10:48:35 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:49.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.114 --rc genhtml_branch_coverage=1 00:04:49.114 --rc genhtml_function_coverage=1 00:04:49.114 --rc genhtml_legend=1 00:04:49.114 --rc geninfo_all_blocks=1 00:04:49.114 --rc geninfo_unexecuted_blocks=1 00:04:49.114 00:04:49.114 ' 00:04:49.114 10:48:35 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:49.114 10:48:35 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:49.114 10:48:35 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:49.114 10:48:35 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:49.114 10:48:35 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:49.114 10:48:35 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:49.114 ************************************ 00:04:49.114 START TEST skip_rpc 00:04:49.114 ************************************ 00:04:49.114 10:48:35 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:49.114 10:48:35 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@16 -- # local spdk_pid=56846 00:04:49.114 10:48:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:49.114 10:48:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:49.114 10:48:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:49.114 [2024-11-15 10:48:35.891429] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:04:49.114 [2024-11-15 10:48:35.891551] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56846 ] 00:04:49.372 [2024-11-15 10:48:36.045337] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:49.372 [2024-11-15 10:48:36.116434] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.372 [2024-11-15 10:48:36.195465] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:54.646 10:48:40 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:54.646 10:48:40 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:54.646 10:48:40 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:54.646 10:48:40 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:54.646 10:48:40 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:54.646 10:48:40 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:54.646 10:48:40 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:54.646 10:48:40 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:54.646 10:48:40 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:54.646 10:48:40 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:54.646 10:48:40 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:54.646 10:48:40 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:54.646 10:48:40 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:54.646 10:48:40 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:54.646 10:48:40 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:54.646 10:48:40 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:54.646 10:48:40 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 56846 00:04:54.646 10:48:40 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 56846 ']' 00:04:54.646 10:48:40 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 56846 00:04:54.646 10:48:40 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:54.646 10:48:40 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:54.646 10:48:40 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56846 00:04:54.646 killing process with pid 56846 00:04:54.646 10:48:40 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:54.646 10:48:40 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:54.646 10:48:40 skip_rpc.skip_rpc -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 56846' 00:04:54.646 10:48:40 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 56846 00:04:54.646 10:48:40 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 56846 00:04:54.646 00:04:54.646 real 0m5.439s 00:04:54.646 user 0m5.051s 00:04:54.646 sys 0m0.305s 00:04:54.646 10:48:41 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:54.646 ************************************ 00:04:54.646 END TEST skip_rpc 00:04:54.646 ************************************ 00:04:54.646 10:48:41 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:54.646 10:48:41 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:54.646 10:48:41 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:54.646 10:48:41 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:54.646 10:48:41 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:54.646 ************************************ 00:04:54.646 START TEST skip_rpc_with_json 00:04:54.646 ************************************ 00:04:54.646 10:48:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:54.646 10:48:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:54.646 10:48:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=56928 00:04:54.646 10:48:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:54.646 10:48:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 56928 00:04:54.646 10:48:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:54.646 10:48:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 56928 ']' 00:04:54.646 10:48:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:54.646 10:48:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:54.646 10:48:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:54.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:54.646 10:48:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:54.646 10:48:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:54.646 [2024-11-15 10:48:41.389628] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
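The skip_rpc test that just completed starts the target as 'spdk_tgt --no-rpc-server -m 0x1', so the assertion is simply that no RPC can get through; checked by hand it would amount to something like:

  scripts/rpc.py spdk_get_version || echo 'call failed as expected: RPC server is disabled'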
00:04:54.646 [2024-11-15 10:48:41.389755] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56928 ] 00:04:54.905 [2024-11-15 10:48:41.534594] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:54.905 [2024-11-15 10:48:41.617060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.905 [2024-11-15 10:48:41.688724] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:55.842 10:48:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:55.842 10:48:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:55.842 10:48:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:55.842 10:48:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:55.842 10:48:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:55.842 [2024-11-15 10:48:42.412395] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:55.842 request: 00:04:55.842 { 00:04:55.842 "trtype": "tcp", 00:04:55.842 "method": "nvmf_get_transports", 00:04:55.842 "req_id": 1 00:04:55.842 } 00:04:55.842 Got JSON-RPC error response 00:04:55.842 response: 00:04:55.842 { 00:04:55.842 "code": -19, 00:04:55.842 "message": "No such device" 00:04:55.842 } 00:04:55.842 10:48:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:55.843 10:48:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:55.843 10:48:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:55.843 10:48:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:55.843 [2024-11-15 10:48:42.424516] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:55.843 10:48:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:55.843 10:48:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:55.843 10:48:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:55.843 10:48:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:55.843 10:48:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:55.843 10:48:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:55.843 { 00:04:55.843 "subsystems": [ 00:04:55.843 { 00:04:55.843 "subsystem": "fsdev", 00:04:55.843 "config": [ 00:04:55.843 { 00:04:55.843 "method": "fsdev_set_opts", 00:04:55.843 "params": { 00:04:55.843 "fsdev_io_pool_size": 65535, 00:04:55.843 "fsdev_io_cache_size": 256 00:04:55.843 } 00:04:55.843 } 00:04:55.843 ] 00:04:55.843 }, 00:04:55.843 { 00:04:55.843 "subsystem": "keyring", 00:04:55.843 "config": [] 00:04:55.843 }, 00:04:55.843 { 00:04:55.843 "subsystem": "iobuf", 00:04:55.843 "config": [ 00:04:55.843 { 00:04:55.843 "method": "iobuf_set_options", 00:04:55.843 "params": { 00:04:55.843 "small_pool_count": 8192, 00:04:55.843 "large_pool_count": 1024, 00:04:55.843 "small_bufsize": 8192, 00:04:55.843 "large_bufsize": 135168, 00:04:55.843 "enable_numa": false 00:04:55.843 } 
00:04:55.843 } 00:04:55.843 ] 00:04:55.843 }, 00:04:55.843 { 00:04:55.843 "subsystem": "sock", 00:04:55.843 "config": [ 00:04:55.843 { 00:04:55.843 "method": "sock_set_default_impl", 00:04:55.843 "params": { 00:04:55.843 "impl_name": "uring" 00:04:55.843 } 00:04:55.843 }, 00:04:55.843 { 00:04:55.843 "method": "sock_impl_set_options", 00:04:55.843 "params": { 00:04:55.843 "impl_name": "ssl", 00:04:55.843 "recv_buf_size": 4096, 00:04:55.843 "send_buf_size": 4096, 00:04:55.843 "enable_recv_pipe": true, 00:04:55.843 "enable_quickack": false, 00:04:55.843 "enable_placement_id": 0, 00:04:55.843 "enable_zerocopy_send_server": true, 00:04:55.843 "enable_zerocopy_send_client": false, 00:04:55.843 "zerocopy_threshold": 0, 00:04:55.843 "tls_version": 0, 00:04:55.843 "enable_ktls": false 00:04:55.843 } 00:04:55.843 }, 00:04:55.843 { 00:04:55.843 "method": "sock_impl_set_options", 00:04:55.843 "params": { 00:04:55.843 "impl_name": "posix", 00:04:55.843 "recv_buf_size": 2097152, 00:04:55.843 "send_buf_size": 2097152, 00:04:55.843 "enable_recv_pipe": true, 00:04:55.843 "enable_quickack": false, 00:04:55.843 "enable_placement_id": 0, 00:04:55.843 "enable_zerocopy_send_server": true, 00:04:55.843 "enable_zerocopy_send_client": false, 00:04:55.843 "zerocopy_threshold": 0, 00:04:55.843 "tls_version": 0, 00:04:55.843 "enable_ktls": false 00:04:55.843 } 00:04:55.843 }, 00:04:55.843 { 00:04:55.843 "method": "sock_impl_set_options", 00:04:55.843 "params": { 00:04:55.843 "impl_name": "uring", 00:04:55.843 "recv_buf_size": 2097152, 00:04:55.843 "send_buf_size": 2097152, 00:04:55.843 "enable_recv_pipe": true, 00:04:55.843 "enable_quickack": false, 00:04:55.843 "enable_placement_id": 0, 00:04:55.843 "enable_zerocopy_send_server": false, 00:04:55.843 "enable_zerocopy_send_client": false, 00:04:55.843 "zerocopy_threshold": 0, 00:04:55.843 "tls_version": 0, 00:04:55.843 "enable_ktls": false 00:04:55.843 } 00:04:55.843 } 00:04:55.843 ] 00:04:55.843 }, 00:04:55.843 { 00:04:55.843 "subsystem": "vmd", 00:04:55.843 "config": [] 00:04:55.843 }, 00:04:55.843 { 00:04:55.843 "subsystem": "accel", 00:04:55.843 "config": [ 00:04:55.843 { 00:04:55.843 "method": "accel_set_options", 00:04:55.843 "params": { 00:04:55.843 "small_cache_size": 128, 00:04:55.843 "large_cache_size": 16, 00:04:55.843 "task_count": 2048, 00:04:55.843 "sequence_count": 2048, 00:04:55.843 "buf_count": 2048 00:04:55.843 } 00:04:55.843 } 00:04:55.843 ] 00:04:55.843 }, 00:04:55.843 { 00:04:55.843 "subsystem": "bdev", 00:04:55.843 "config": [ 00:04:55.843 { 00:04:55.843 "method": "bdev_set_options", 00:04:55.843 "params": { 00:04:55.843 "bdev_io_pool_size": 65535, 00:04:55.843 "bdev_io_cache_size": 256, 00:04:55.843 "bdev_auto_examine": true, 00:04:55.843 "iobuf_small_cache_size": 128, 00:04:55.843 "iobuf_large_cache_size": 16 00:04:55.843 } 00:04:55.843 }, 00:04:55.843 { 00:04:55.843 "method": "bdev_raid_set_options", 00:04:55.843 "params": { 00:04:55.843 "process_window_size_kb": 1024, 00:04:55.843 "process_max_bandwidth_mb_sec": 0 00:04:55.843 } 00:04:55.843 }, 00:04:55.843 { 00:04:55.843 "method": "bdev_iscsi_set_options", 00:04:55.843 "params": { 00:04:55.843 "timeout_sec": 30 00:04:55.843 } 00:04:55.843 }, 00:04:55.843 { 00:04:55.843 "method": "bdev_nvme_set_options", 00:04:55.843 "params": { 00:04:55.843 "action_on_timeout": "none", 00:04:55.843 "timeout_us": 0, 00:04:55.843 "timeout_admin_us": 0, 00:04:55.843 "keep_alive_timeout_ms": 10000, 00:04:55.843 "arbitration_burst": 0, 00:04:55.843 "low_priority_weight": 0, 00:04:55.843 "medium_priority_weight": 
0, 00:04:55.843 "high_priority_weight": 0, 00:04:55.843 "nvme_adminq_poll_period_us": 10000, 00:04:55.843 "nvme_ioq_poll_period_us": 0, 00:04:55.843 "io_queue_requests": 0, 00:04:55.843 "delay_cmd_submit": true, 00:04:55.843 "transport_retry_count": 4, 00:04:55.843 "bdev_retry_count": 3, 00:04:55.843 "transport_ack_timeout": 0, 00:04:55.843 "ctrlr_loss_timeout_sec": 0, 00:04:55.843 "reconnect_delay_sec": 0, 00:04:55.843 "fast_io_fail_timeout_sec": 0, 00:04:55.843 "disable_auto_failback": false, 00:04:55.843 "generate_uuids": false, 00:04:55.843 "transport_tos": 0, 00:04:55.843 "nvme_error_stat": false, 00:04:55.843 "rdma_srq_size": 0, 00:04:55.843 "io_path_stat": false, 00:04:55.843 "allow_accel_sequence": false, 00:04:55.843 "rdma_max_cq_size": 0, 00:04:55.843 "rdma_cm_event_timeout_ms": 0, 00:04:55.843 "dhchap_digests": [ 00:04:55.843 "sha256", 00:04:55.843 "sha384", 00:04:55.843 "sha512" 00:04:55.843 ], 00:04:55.843 "dhchap_dhgroups": [ 00:04:55.843 "null", 00:04:55.843 "ffdhe2048", 00:04:55.843 "ffdhe3072", 00:04:55.843 "ffdhe4096", 00:04:55.843 "ffdhe6144", 00:04:55.843 "ffdhe8192" 00:04:55.843 ] 00:04:55.843 } 00:04:55.843 }, 00:04:55.843 { 00:04:55.843 "method": "bdev_nvme_set_hotplug", 00:04:55.843 "params": { 00:04:55.843 "period_us": 100000, 00:04:55.843 "enable": false 00:04:55.843 } 00:04:55.843 }, 00:04:55.843 { 00:04:55.843 "method": "bdev_wait_for_examine" 00:04:55.843 } 00:04:55.843 ] 00:04:55.843 }, 00:04:55.843 { 00:04:55.843 "subsystem": "scsi", 00:04:55.843 "config": null 00:04:55.843 }, 00:04:55.843 { 00:04:55.843 "subsystem": "scheduler", 00:04:55.843 "config": [ 00:04:55.843 { 00:04:55.844 "method": "framework_set_scheduler", 00:04:55.844 "params": { 00:04:55.844 "name": "static" 00:04:55.844 } 00:04:55.844 } 00:04:55.844 ] 00:04:55.844 }, 00:04:55.844 { 00:04:55.844 "subsystem": "vhost_scsi", 00:04:55.844 "config": [] 00:04:55.844 }, 00:04:55.844 { 00:04:55.844 "subsystem": "vhost_blk", 00:04:55.844 "config": [] 00:04:55.844 }, 00:04:55.844 { 00:04:55.844 "subsystem": "ublk", 00:04:55.844 "config": [] 00:04:55.844 }, 00:04:55.844 { 00:04:55.844 "subsystem": "nbd", 00:04:55.844 "config": [] 00:04:55.844 }, 00:04:55.844 { 00:04:55.844 "subsystem": "nvmf", 00:04:55.844 "config": [ 00:04:55.844 { 00:04:55.844 "method": "nvmf_set_config", 00:04:55.844 "params": { 00:04:55.844 "discovery_filter": "match_any", 00:04:55.844 "admin_cmd_passthru": { 00:04:55.844 "identify_ctrlr": false 00:04:55.844 }, 00:04:55.844 "dhchap_digests": [ 00:04:55.844 "sha256", 00:04:55.844 "sha384", 00:04:55.844 "sha512" 00:04:55.844 ], 00:04:55.844 "dhchap_dhgroups": [ 00:04:55.844 "null", 00:04:55.844 "ffdhe2048", 00:04:55.844 "ffdhe3072", 00:04:55.844 "ffdhe4096", 00:04:55.844 "ffdhe6144", 00:04:55.844 "ffdhe8192" 00:04:55.844 ] 00:04:55.844 } 00:04:55.844 }, 00:04:55.844 { 00:04:55.844 "method": "nvmf_set_max_subsystems", 00:04:55.844 "params": { 00:04:55.844 "max_subsystems": 1024 00:04:55.844 } 00:04:55.844 }, 00:04:55.844 { 00:04:55.844 "method": "nvmf_set_crdt", 00:04:55.844 "params": { 00:04:55.844 "crdt1": 0, 00:04:55.844 "crdt2": 0, 00:04:55.844 "crdt3": 0 00:04:55.844 } 00:04:55.844 }, 00:04:55.844 { 00:04:55.844 "method": "nvmf_create_transport", 00:04:55.844 "params": { 00:04:55.844 "trtype": "TCP", 00:04:55.844 "max_queue_depth": 128, 00:04:55.844 "max_io_qpairs_per_ctrlr": 127, 00:04:55.844 "in_capsule_data_size": 4096, 00:04:55.844 "max_io_size": 131072, 00:04:55.844 "io_unit_size": 131072, 00:04:55.844 "max_aq_depth": 128, 00:04:55.844 "num_shared_buffers": 511, 00:04:55.844 
"buf_cache_size": 4294967295, 00:04:55.844 "dif_insert_or_strip": false, 00:04:55.844 "zcopy": false, 00:04:55.844 "c2h_success": true, 00:04:55.844 "sock_priority": 0, 00:04:55.844 "abort_timeout_sec": 1, 00:04:55.844 "ack_timeout": 0, 00:04:55.844 "data_wr_pool_size": 0 00:04:55.844 } 00:04:55.844 } 00:04:55.844 ] 00:04:55.844 }, 00:04:55.844 { 00:04:55.844 "subsystem": "iscsi", 00:04:55.844 "config": [ 00:04:55.844 { 00:04:55.844 "method": "iscsi_set_options", 00:04:55.844 "params": { 00:04:55.844 "node_base": "iqn.2016-06.io.spdk", 00:04:55.844 "max_sessions": 128, 00:04:55.844 "max_connections_per_session": 2, 00:04:55.844 "max_queue_depth": 64, 00:04:55.844 "default_time2wait": 2, 00:04:55.844 "default_time2retain": 20, 00:04:55.844 "first_burst_length": 8192, 00:04:55.844 "immediate_data": true, 00:04:55.844 "allow_duplicated_isid": false, 00:04:55.844 "error_recovery_level": 0, 00:04:55.844 "nop_timeout": 60, 00:04:55.844 "nop_in_interval": 30, 00:04:55.844 "disable_chap": false, 00:04:55.844 "require_chap": false, 00:04:55.844 "mutual_chap": false, 00:04:55.844 "chap_group": 0, 00:04:55.844 "max_large_datain_per_connection": 64, 00:04:55.844 "max_r2t_per_connection": 4, 00:04:55.844 "pdu_pool_size": 36864, 00:04:55.844 "immediate_data_pool_size": 16384, 00:04:55.844 "data_out_pool_size": 2048 00:04:55.844 } 00:04:55.844 } 00:04:55.844 ] 00:04:55.844 } 00:04:55.844 ] 00:04:55.844 } 00:04:55.844 10:48:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:55.844 10:48:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 56928 00:04:55.844 10:48:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 56928 ']' 00:04:55.844 10:48:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 56928 00:04:55.844 10:48:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:55.844 10:48:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:55.844 10:48:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56928 00:04:55.844 10:48:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:55.844 10:48:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:55.844 killing process with pid 56928 00:04:55.844 10:48:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56928' 00:04:55.844 10:48:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 56928 00:04:55.844 10:48:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 56928 00:04:56.431 10:48:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=56960 00:04:56.431 10:48:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:56.431 10:48:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:01.707 10:48:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 56960 00:05:01.707 10:48:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 56960 ']' 00:05:01.707 10:48:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 56960 00:05:01.707 10:48:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:01.707 10:48:48 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:01.707 10:48:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56960 00:05:01.707 10:48:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:01.707 killing process with pid 56960 00:05:01.707 10:48:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:01.707 10:48:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56960' 00:05:01.707 10:48:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 56960 00:05:01.707 10:48:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 56960 00:05:01.707 10:48:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:01.707 10:48:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:01.707 00:05:01.707 real 0m7.152s 00:05:01.707 user 0m6.919s 00:05:01.707 sys 0m0.679s 00:05:01.707 10:48:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:01.707 10:48:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:01.707 ************************************ 00:05:01.707 END TEST skip_rpc_with_json 00:05:01.707 ************************************ 00:05:01.707 10:48:48 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:01.707 10:48:48 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:01.707 10:48:48 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:01.707 10:48:48 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.707 ************************************ 00:05:01.707 START TEST skip_rpc_with_delay 00:05:01.707 ************************************ 00:05:01.707 10:48:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:05:01.707 10:48:48 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:01.707 10:48:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:05:01.707 10:48:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:01.707 10:48:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:01.707 10:48:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:01.707 10:48:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:01.707 10:48:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:01.707 10:48:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:01.707 10:48:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:01.707 10:48:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:01.707 10:48:48 
skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:01.707 10:48:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:01.966 [2024-11-15 10:48:48.590189] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:05:01.966 10:48:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:05:01.966 10:48:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:01.966 10:48:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:01.966 10:48:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:01.966 00:05:01.966 real 0m0.090s 00:05:01.966 user 0m0.056s 00:05:01.966 sys 0m0.033s 00:05:01.966 10:48:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:01.966 ************************************ 00:05:01.966 END TEST skip_rpc_with_delay 00:05:01.966 ************************************ 00:05:01.966 10:48:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:01.966 10:48:48 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:01.966 10:48:48 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:01.966 10:48:48 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:01.966 10:48:48 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:01.966 10:48:48 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:01.966 10:48:48 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.966 ************************************ 00:05:01.966 START TEST exit_on_failed_rpc_init 00:05:01.966 ************************************ 00:05:01.966 10:48:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:05:01.966 10:48:48 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57069 00:05:01.966 10:48:48 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57069 00:05:01.966 10:48:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57069 ']' 00:05:01.966 10:48:48 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:01.966 10:48:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:01.966 10:48:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:01.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:01.966 10:48:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:01.966 10:48:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:01.966 10:48:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:01.966 [2024-11-15 10:48:48.733645] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
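The skip_rpc_with_delay case above rests on a single argument conflict: --wait-for-rpc needs an RPC server, so pairing it with --no-rpc-server has to make spdk_tgt refuse to start. A minimal sketch of that check, using the binary path from this run:

  # expected to exit non-zero with: Cannot use '--wait-for-rpc' if no RPC server is going to be started.
  if /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
    echo "unexpected: target started" >&2
  else
    echo "ok: argument conflict rejected"
  fi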
00:05:01.966 [2024-11-15 10:48:48.733758] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57069 ] 00:05:02.225 [2024-11-15 10:48:48.873236] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.225 [2024-11-15 10:48:48.925468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.225 [2024-11-15 10:48:48.990207] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:03.172 10:48:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:03.172 10:48:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:05:03.172 10:48:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:03.172 10:48:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:03.172 10:48:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:05:03.172 10:48:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:03.172 10:48:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:03.172 10:48:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:03.172 10:48:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:03.172 10:48:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:03.172 10:48:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:03.172 10:48:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:03.172 10:48:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:03.172 10:48:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:03.172 10:48:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:03.172 [2024-11-15 10:48:49.762023] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:05:03.172 [2024-11-15 10:48:49.762136] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57087 ] 00:05:03.172 [2024-11-15 10:48:49.914223] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.172 [2024-11-15 10:48:49.966245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:03.172 [2024-11-15 10:48:49.966334] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
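The error above is exactly what exit_on_failed_rpc_init provokes: both instances default to /var/tmp/spdk.sock, so the second one's RPC listener cannot bind and the app has to stop with a non-zero status. A rough sketch of running two targets side by side without the collision, using the -r option this log also uses later to choose the RPC socket (the spdk_a/spdk_b socket names are made up for the example, and it assumes the host has hugepage memory for both instances):

  BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt   # path from this run
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$BIN" -m 0x1 -r /var/tmp/spdk_a.sock &
  "$BIN" -m 0x2 -r /var/tmp/spdk_b.sock &
  sleep 5
  # each instance answers on its own socket
  "$RPC" -s /var/tmp/spdk_a.sock spdk_get_version
  "$RPC" -s /var/tmp/spdk_b.sock spdk_get_version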
00:05:03.172 [2024-11-15 10:48:49.966352] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:03.172 [2024-11-15 10:48:49.966362] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:03.172 10:48:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:05:03.172 10:48:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:03.172 10:48:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:05:03.172 10:48:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:05:03.172 10:48:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:05:03.172 10:48:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:03.172 10:48:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:03.172 10:48:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57069 00:05:03.172 10:48:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57069 ']' 00:05:03.172 10:48:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57069 00:05:03.432 10:48:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:05:03.432 10:48:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:03.432 10:48:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57069 00:05:03.432 10:48:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:03.432 10:48:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:03.432 killing process with pid 57069 00:05:03.432 10:48:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57069' 00:05:03.432 10:48:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57069 00:05:03.432 10:48:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57069 00:05:03.691 00:05:03.691 real 0m1.744s 00:05:03.691 user 0m1.988s 00:05:03.691 sys 0m0.416s 00:05:03.691 10:48:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:03.691 10:48:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:03.691 ************************************ 00:05:03.691 END TEST exit_on_failed_rpc_init 00:05:03.691 ************************************ 00:05:03.691 10:48:50 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:03.691 00:05:03.691 real 0m14.824s 00:05:03.691 user 0m14.199s 00:05:03.691 sys 0m1.640s 00:05:03.691 10:48:50 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:03.691 10:48:50 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.691 ************************************ 00:05:03.691 END TEST skip_rpc 00:05:03.691 ************************************ 00:05:03.691 10:48:50 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:03.691 10:48:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:03.691 10:48:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:03.691 10:48:50 -- common/autotest_common.sh@10 -- # set +x 00:05:03.691 
************************************ 00:05:03.691 START TEST rpc_client 00:05:03.691 ************************************ 00:05:03.691 10:48:50 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:03.951 * Looking for test storage... 00:05:03.951 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:03.951 10:48:50 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:03.951 10:48:50 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:05:03.951 10:48:50 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:03.951 10:48:50 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:03.951 10:48:50 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:03.951 10:48:50 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:03.951 10:48:50 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:03.951 10:48:50 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:03.951 10:48:50 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:03.951 10:48:50 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:03.951 10:48:50 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:03.951 10:48:50 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:03.951 10:48:50 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:03.951 10:48:50 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:03.951 10:48:50 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:03.951 10:48:50 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:03.951 10:48:50 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:03.951 10:48:50 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:03.951 10:48:50 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:03.951 10:48:50 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:03.951 10:48:50 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:03.951 10:48:50 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:03.952 10:48:50 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:03.952 10:48:50 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:03.952 10:48:50 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:03.952 10:48:50 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:03.952 10:48:50 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:03.952 10:48:50 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:03.952 10:48:50 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:03.952 10:48:50 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:03.952 10:48:50 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:03.952 10:48:50 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:03.952 10:48:50 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:03.952 10:48:50 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:03.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.952 --rc genhtml_branch_coverage=1 00:05:03.952 --rc genhtml_function_coverage=1 00:05:03.952 --rc genhtml_legend=1 00:05:03.952 --rc geninfo_all_blocks=1 00:05:03.952 --rc geninfo_unexecuted_blocks=1 00:05:03.952 00:05:03.952 ' 00:05:03.952 10:48:50 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:03.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.952 --rc genhtml_branch_coverage=1 00:05:03.952 --rc genhtml_function_coverage=1 00:05:03.952 --rc genhtml_legend=1 00:05:03.952 --rc geninfo_all_blocks=1 00:05:03.952 --rc geninfo_unexecuted_blocks=1 00:05:03.952 00:05:03.952 ' 00:05:03.952 10:48:50 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:03.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.952 --rc genhtml_branch_coverage=1 00:05:03.952 --rc genhtml_function_coverage=1 00:05:03.952 --rc genhtml_legend=1 00:05:03.952 --rc geninfo_all_blocks=1 00:05:03.952 --rc geninfo_unexecuted_blocks=1 00:05:03.952 00:05:03.952 ' 00:05:03.952 10:48:50 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:03.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.952 --rc genhtml_branch_coverage=1 00:05:03.952 --rc genhtml_function_coverage=1 00:05:03.952 --rc genhtml_legend=1 00:05:03.952 --rc geninfo_all_blocks=1 00:05:03.952 --rc geninfo_unexecuted_blocks=1 00:05:03.952 00:05:03.952 ' 00:05:03.952 10:48:50 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:03.952 OK 00:05:03.952 10:48:50 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:03.952 00:05:03.952 real 0m0.213s 00:05:03.952 user 0m0.134s 00:05:03.952 sys 0m0.081s 00:05:03.952 10:48:50 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:03.952 10:48:50 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:03.952 ************************************ 00:05:03.952 END TEST rpc_client 00:05:03.952 ************************************ 00:05:03.952 10:48:50 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:03.952 10:48:50 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:03.952 10:48:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:03.952 10:48:50 -- common/autotest_common.sh@10 -- # set +x 00:05:03.952 ************************************ 00:05:03.952 START TEST json_config 00:05:03.952 ************************************ 00:05:03.952 10:48:50 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:04.212 10:48:50 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:04.212 10:48:50 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:05:04.212 10:48:50 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:04.212 10:48:50 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:04.212 10:48:50 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:04.212 10:48:50 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:04.212 10:48:50 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:04.212 10:48:50 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:04.212 10:48:50 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:04.212 10:48:50 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:04.212 10:48:50 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:04.212 10:48:50 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:04.212 10:48:50 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:04.212 10:48:50 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:04.212 10:48:50 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:04.212 10:48:50 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:04.212 10:48:50 json_config -- scripts/common.sh@345 -- # : 1 00:05:04.212 10:48:50 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:04.212 10:48:50 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:04.212 10:48:50 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:04.212 10:48:50 json_config -- scripts/common.sh@353 -- # local d=1 00:05:04.212 10:48:50 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:04.212 10:48:50 json_config -- scripts/common.sh@355 -- # echo 1 00:05:04.212 10:48:50 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:04.212 10:48:50 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:04.212 10:48:50 json_config -- scripts/common.sh@353 -- # local d=2 00:05:04.212 10:48:50 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:04.212 10:48:50 json_config -- scripts/common.sh@355 -- # echo 2 00:05:04.212 10:48:50 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:04.212 10:48:50 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:04.212 10:48:50 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:04.212 10:48:50 json_config -- scripts/common.sh@368 -- # return 0 00:05:04.212 10:48:50 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:04.212 10:48:50 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:04.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.212 --rc genhtml_branch_coverage=1 00:05:04.212 --rc genhtml_function_coverage=1 00:05:04.212 --rc genhtml_legend=1 00:05:04.212 --rc geninfo_all_blocks=1 00:05:04.212 --rc geninfo_unexecuted_blocks=1 00:05:04.212 00:05:04.212 ' 00:05:04.212 10:48:50 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:04.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.212 --rc genhtml_branch_coverage=1 00:05:04.212 --rc genhtml_function_coverage=1 00:05:04.212 --rc genhtml_legend=1 00:05:04.212 --rc geninfo_all_blocks=1 00:05:04.212 --rc geninfo_unexecuted_blocks=1 00:05:04.212 00:05:04.212 ' 00:05:04.212 10:48:50 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:04.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.213 --rc genhtml_branch_coverage=1 00:05:04.213 --rc genhtml_function_coverage=1 00:05:04.213 --rc genhtml_legend=1 00:05:04.213 --rc geninfo_all_blocks=1 00:05:04.213 --rc geninfo_unexecuted_blocks=1 00:05:04.213 00:05:04.213 ' 00:05:04.213 10:48:50 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:04.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.213 --rc genhtml_branch_coverage=1 00:05:04.213 --rc genhtml_function_coverage=1 00:05:04.213 --rc genhtml_legend=1 00:05:04.213 --rc geninfo_all_blocks=1 00:05:04.213 --rc geninfo_unexecuted_blocks=1 00:05:04.213 00:05:04.213 ' 00:05:04.213 10:48:50 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:04.213 10:48:50 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:04.213 10:48:50 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:04.213 10:48:50 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:04.213 10:48:50 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:04.213 10:48:50 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:04.213 10:48:50 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:04.213 10:48:50 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:04.213 10:48:50 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:04.213 10:48:50 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:04.213 10:48:50 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:04.213 10:48:50 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:04.213 10:48:50 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:05:04.213 10:48:50 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:05:04.213 10:48:50 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:04.213 10:48:50 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:04.213 10:48:50 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:04.213 10:48:50 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:04.213 10:48:50 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:04.213 10:48:50 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:04.213 10:48:50 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:04.213 10:48:50 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:04.213 10:48:50 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:04.213 10:48:50 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:04.213 10:48:50 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:04.213 10:48:50 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:04.213 10:48:50 json_config -- paths/export.sh@5 -- # export PATH 00:05:04.213 10:48:50 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:04.213 10:48:50 json_config -- nvmf/common.sh@51 -- # : 0 00:05:04.213 10:48:50 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:04.213 10:48:50 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:04.213 10:48:50 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:04.213 10:48:50 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:04.213 10:48:50 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:04.213 10:48:50 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:04.213 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:04.213 10:48:50 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:04.213 10:48:50 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:04.213 10:48:50 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:04.213 10:48:50 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:04.213 10:48:50 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:04.213 10:48:50 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:04.213 10:48:50 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:04.213 10:48:50 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:04.213 10:48:50 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:04.213 10:48:50 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:04.213 10:48:50 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:04.213 10:48:50 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:04.213 10:48:50 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:04.213 10:48:50 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:04.213 10:48:50 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:05:04.213 10:48:50 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:04.213 10:48:50 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:04.213 10:48:50 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:04.213 INFO: JSON configuration test init 00:05:04.213 10:48:50 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:05:04.213 10:48:50 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:05:04.213 10:48:50 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:05:04.213 10:48:50 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:04.213 10:48:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:04.213 10:48:50 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:05:04.213 10:48:50 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:04.213 10:48:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:04.213 10:48:50 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:05:04.213 10:48:50 json_config -- json_config/common.sh@9 -- # local app=target 00:05:04.213 10:48:50 json_config -- json_config/common.sh@10 -- # shift 
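The json_config run that starts here leans on --wait-for-rpc: the target brings up only the RPC server and holds off subsystem initialization until told to continue, which is what lets the test push a saved configuration in first. A condensed sketch of the pattern (socket, core mask and memory size as used in this run; framework_start_init, not shown in this log, is assumed to be the RPC that resumes the deferred initialization):

  BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
  "$BIN" -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
  until [ -S /var/tmp/spdk_tgt.sock ]; do sleep 0.5; done   # wait for the RPC socket
  $RPC rpc_get_methods        # only the startup-time subset is available here
  $RPC framework_start_init   # let subsystem initialization proceed
  $RPC spdk_get_version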
00:05:04.213 10:48:50 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:04.213 10:48:50 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:04.213 10:48:50 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:04.213 10:48:50 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:04.213 10:48:50 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:04.213 10:48:50 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=57221 00:05:04.213 Waiting for target to run... 00:05:04.213 10:48:50 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:04.213 10:48:50 json_config -- json_config/common.sh@25 -- # waitforlisten 57221 /var/tmp/spdk_tgt.sock 00:05:04.213 10:48:50 json_config -- common/autotest_common.sh@835 -- # '[' -z 57221 ']' 00:05:04.213 10:48:50 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:04.213 10:48:50 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:04.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:04.213 10:48:50 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:04.213 10:48:50 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:04.213 10:48:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:04.213 10:48:50 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:04.213 [2024-11-15 10:48:51.024509] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:05:04.213 [2024-11-15 10:48:51.024629] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57221 ] 00:05:04.783 [2024-11-15 10:48:51.463096] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.783 [2024-11-15 10:48:51.497516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.351 10:48:52 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:05.351 10:48:52 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:05.351 00:05:05.351 10:48:52 json_config -- json_config/common.sh@26 -- # echo '' 00:05:05.351 10:48:52 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:05:05.351 10:48:52 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:05:05.351 10:48:52 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:05.351 10:48:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:05.351 10:48:52 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:05:05.351 10:48:52 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:05:05.351 10:48:52 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:05.351 10:48:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:05.351 10:48:52 json_config -- json_config/json_config.sh@280 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:05.351 10:48:52 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:05:05.351 10:48:52 json_config 
-- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:05.609 [2024-11-15 10:48:52.376652] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:05.868 10:48:52 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:05:05.868 10:48:52 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:05.868 10:48:52 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:05.868 10:48:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:05.868 10:48:52 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:05.868 10:48:52 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:05.868 10:48:52 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:05.868 10:48:52 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:05:05.868 10:48:52 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:05:05.868 10:48:52 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:05.868 10:48:52 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:05.868 10:48:52 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:06.128 10:48:52 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:05:06.128 10:48:52 json_config -- json_config/json_config.sh@51 -- # local get_types 00:05:06.128 10:48:52 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:05:06.128 10:48:52 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:05:06.128 10:48:52 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:05:06.128 10:48:52 json_config -- json_config/json_config.sh@54 -- # sort 00:05:06.128 10:48:52 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:05:06.128 10:48:52 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:05:06.128 10:48:52 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:05:06.128 10:48:52 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:05:06.128 10:48:52 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:06.128 10:48:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:06.128 10:48:52 json_config -- json_config/json_config.sh@62 -- # return 0 00:05:06.128 10:48:52 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:05:06.128 10:48:52 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:05:06.128 10:48:52 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:05:06.128 10:48:52 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:05:06.128 10:48:52 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:05:06.128 10:48:52 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:05:06.128 10:48:52 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:06.128 10:48:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:06.128 10:48:52 json_config -- 
json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:06.128 10:48:52 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:05:06.128 10:48:52 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:05:06.128 10:48:52 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:06.128 10:48:52 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:06.387 MallocForNvmf0 00:05:06.387 10:48:53 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:06.387 10:48:53 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:06.646 MallocForNvmf1 00:05:06.646 10:48:53 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:06.646 10:48:53 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:06.905 [2024-11-15 10:48:53.652171] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:06.905 10:48:53 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:06.905 10:48:53 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:07.164 10:48:53 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:07.164 10:48:53 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:07.424 10:48:54 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:07.424 10:48:54 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:07.683 10:48:54 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:07.683 10:48:54 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:07.942 [2024-11-15 10:48:54.552656] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:07.942 10:48:54 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:05:07.942 10:48:54 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:07.942 10:48:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:07.942 10:48:54 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:05:07.942 10:48:54 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:07.942 10:48:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:07.942 10:48:54 json_config -- json_config/json_config.sh@302 -- # [[ 
0 -eq 1 ]] 00:05:07.942 10:48:54 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:07.942 10:48:54 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:08.201 MallocBdevForConfigChangeCheck 00:05:08.201 10:48:54 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:05:08.201 10:48:54 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:08.201 10:48:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:08.201 10:48:54 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:05:08.201 10:48:54 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:08.788 INFO: shutting down applications... 00:05:08.788 10:48:55 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:05:08.788 10:48:55 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:05:08.788 10:48:55 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:05:08.788 10:48:55 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:05:08.788 10:48:55 json_config -- json_config/json_config.sh@340 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:09.060 Calling clear_iscsi_subsystem 00:05:09.060 Calling clear_nvmf_subsystem 00:05:09.060 Calling clear_nbd_subsystem 00:05:09.060 Calling clear_ublk_subsystem 00:05:09.060 Calling clear_vhost_blk_subsystem 00:05:09.060 Calling clear_vhost_scsi_subsystem 00:05:09.060 Calling clear_bdev_subsystem 00:05:09.060 10:48:55 json_config -- json_config/json_config.sh@344 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:05:09.060 10:48:55 json_config -- json_config/json_config.sh@350 -- # count=100 00:05:09.060 10:48:55 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:05:09.060 10:48:55 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:09.060 10:48:55 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:09.060 10:48:55 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:05:09.319 10:48:56 json_config -- json_config/json_config.sh@352 -- # break 00:05:09.319 10:48:56 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:05:09.319 10:48:56 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:05:09.319 10:48:56 json_config -- json_config/common.sh@31 -- # local app=target 00:05:09.319 10:48:56 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:09.319 10:48:56 json_config -- json_config/common.sh@35 -- # [[ -n 57221 ]] 00:05:09.319 10:48:56 json_config -- json_config/common.sh@38 -- # kill -SIGINT 57221 00:05:09.319 10:48:56 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:09.319 10:48:56 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:09.319 10:48:56 json_config -- json_config/common.sh@41 -- # kill -0 57221 00:05:09.319 10:48:56 json_config -- json_config/common.sh@45 -- # 
sleep 0.5 00:05:09.887 10:48:56 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:09.887 10:48:56 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:09.887 10:48:56 json_config -- json_config/common.sh@41 -- # kill -0 57221 00:05:09.887 SPDK target shutdown done 00:05:09.887 INFO: relaunching applications... 00:05:09.887 10:48:56 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:09.887 10:48:56 json_config -- json_config/common.sh@43 -- # break 00:05:09.887 10:48:56 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:09.887 10:48:56 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:09.887 10:48:56 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:05:09.887 10:48:56 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:09.887 10:48:56 json_config -- json_config/common.sh@9 -- # local app=target 00:05:09.887 10:48:56 json_config -- json_config/common.sh@10 -- # shift 00:05:09.887 10:48:56 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:09.887 10:48:56 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:09.887 10:48:56 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:09.887 10:48:56 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:09.887 10:48:56 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:09.887 10:48:56 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=57417 00:05:09.887 10:48:56 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:09.887 10:48:56 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:09.887 Waiting for target to run... 00:05:09.887 10:48:56 json_config -- json_config/common.sh@25 -- # waitforlisten 57417 /var/tmp/spdk_tgt.sock 00:05:09.887 10:48:56 json_config -- common/autotest_common.sh@835 -- # '[' -z 57417 ']' 00:05:09.887 10:48:56 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:09.887 10:48:56 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:09.887 10:48:56 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:09.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:09.887 10:48:56 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:09.887 10:48:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:09.887 [2024-11-15 10:48:56.631267] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
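The relaunch above restores the target from the configuration saved earlier in the run. A minimal sketch of that start-and-wait pattern, using the paths and flags shown in this trace; the readiness poll via rpc_get_methods is an assumption here (the test itself uses its waitforlisten helper):

    # Start spdk_tgt from the previously saved JSON config (paths as in this log).
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
        -r /var/tmp/spdk_tgt.sock \
        --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json &
    tgt_pid=$!

    # Poll until the RPC socket answers; -t sets a per-call timeout for rpc.py.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock \
            -t 2 rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done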
00:05:09.888 [2024-11-15 10:48:56.631528] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57417 ] 00:05:10.455 [2024-11-15 10:48:57.056091] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.455 [2024-11-15 10:48:57.091116] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.455 [2024-11-15 10:48:57.226494] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:10.714 [2024-11-15 10:48:57.439157] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:10.714 [2024-11-15 10:48:57.471248] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:10.714 00:05:10.714 INFO: Checking if target configuration is the same... 00:05:10.714 10:48:57 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:10.714 10:48:57 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:10.714 10:48:57 json_config -- json_config/common.sh@26 -- # echo '' 00:05:10.714 10:48:57 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:05:10.714 10:48:57 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:10.714 10:48:57 json_config -- json_config/json_config.sh@385 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:10.714 10:48:57 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:05:10.714 10:48:57 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:10.714 + '[' 2 -ne 2 ']' 00:05:10.714 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:10.714 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:05:10.714 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:10.714 +++ basename /dev/fd/62 00:05:10.714 ++ mktemp /tmp/62.XXX 00:05:10.973 + tmp_file_1=/tmp/62.eND 00:05:10.973 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:10.973 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:10.973 + tmp_file_2=/tmp/spdk_tgt_config.json.GhJ 00:05:10.973 + ret=0 00:05:10.973 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:11.232 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:11.232 + diff -u /tmp/62.eND /tmp/spdk_tgt_config.json.GhJ 00:05:11.232 + echo 'INFO: JSON config files are the same' 00:05:11.232 INFO: JSON config files are the same 00:05:11.232 + rm /tmp/62.eND /tmp/spdk_tgt_config.json.GhJ 00:05:11.232 + exit 0 00:05:11.232 10:48:57 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:05:11.232 INFO: changing configuration and checking if this can be detected... 00:05:11.232 10:48:57 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 
00:05:11.232 10:48:57 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:11.232 10:48:57 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:11.491 10:48:58 json_config -- json_config/json_config.sh@394 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:11.491 10:48:58 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:05:11.491 10:48:58 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:11.491 + '[' 2 -ne 2 ']' 00:05:11.492 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:11.492 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:05:11.492 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:11.492 +++ basename /dev/fd/62 00:05:11.492 ++ mktemp /tmp/62.XXX 00:05:11.492 + tmp_file_1=/tmp/62.TJI 00:05:11.492 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:11.492 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:11.492 + tmp_file_2=/tmp/spdk_tgt_config.json.xR3 00:05:11.492 + ret=0 00:05:11.492 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:12.060 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:12.060 + diff -u /tmp/62.TJI /tmp/spdk_tgt_config.json.xR3 00:05:12.060 + ret=1 00:05:12.060 + echo '=== Start of file: /tmp/62.TJI ===' 00:05:12.060 + cat /tmp/62.TJI 00:05:12.060 + echo '=== End of file: /tmp/62.TJI ===' 00:05:12.060 + echo '' 00:05:12.060 + echo '=== Start of file: /tmp/spdk_tgt_config.json.xR3 ===' 00:05:12.060 + cat /tmp/spdk_tgt_config.json.xR3 00:05:12.060 + echo '=== End of file: /tmp/spdk_tgt_config.json.xR3 ===' 00:05:12.060 + echo '' 00:05:12.060 + rm /tmp/62.TJI /tmp/spdk_tgt_config.json.xR3 00:05:12.060 + exit 1 00:05:12.060 INFO: configuration change detected. 00:05:12.060 10:48:58 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 
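The sequence above is the change-detection half of the test: MallocBdevForConfigChangeCheck is deleted, the live configuration is saved again, and a sorted diff against spdk_tgt_config.json must now fail. A minimal sketch of that pattern, built only from the commands in this trace and assuming config_filter.py reads JSON on stdin the way json_diff.sh drives it here (/tmp/live_config.json is a hypothetical name for the re-saved config):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py

    # Mutate the running target, then re-save its configuration.
    $rpc -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
    $rpc -s /var/tmp/spdk_tgt.sock save_config > /tmp/live_config.json

    # Sort both sides so ordering differences are ignored; a non-empty diff
    # (exit status 1) means the configuration change was detected.
    if diff -u <($filter -method sort < /tmp/live_config.json) \
               <($filter -method sort < /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json); then
        echo 'ERROR: configuration change was not detected'
    else
        echo 'INFO: configuration change detected.'
    fi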
00:05:12.060 10:48:58 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:05:12.060 10:48:58 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:05:12.060 10:48:58 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:12.060 10:48:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:12.060 10:48:58 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:05:12.060 10:48:58 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:05:12.060 10:48:58 json_config -- json_config/json_config.sh@324 -- # [[ -n 57417 ]] 00:05:12.060 10:48:58 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:05:12.060 10:48:58 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:05:12.060 10:48:58 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:12.060 10:48:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:12.060 10:48:58 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:05:12.060 10:48:58 json_config -- json_config/json_config.sh@200 -- # uname -s 00:05:12.060 10:48:58 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:05:12.060 10:48:58 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:05:12.060 10:48:58 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:05:12.060 10:48:58 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:05:12.060 10:48:58 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:12.060 10:48:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:12.060 10:48:58 json_config -- json_config/json_config.sh@330 -- # killprocess 57417 00:05:12.060 10:48:58 json_config -- common/autotest_common.sh@954 -- # '[' -z 57417 ']' 00:05:12.060 10:48:58 json_config -- common/autotest_common.sh@958 -- # kill -0 57417 00:05:12.060 10:48:58 json_config -- common/autotest_common.sh@959 -- # uname 00:05:12.060 10:48:58 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:12.060 10:48:58 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57417 00:05:12.060 killing process with pid 57417 00:05:12.060 10:48:58 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:12.060 10:48:58 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:12.060 10:48:58 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57417' 00:05:12.060 10:48:58 json_config -- common/autotest_common.sh@973 -- # kill 57417 00:05:12.060 10:48:58 json_config -- common/autotest_common.sh@978 -- # wait 57417 00:05:12.320 10:48:59 json_config -- json_config/json_config.sh@333 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:12.320 10:48:59 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:05:12.320 10:48:59 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:12.320 10:48:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:12.320 10:48:59 json_config -- json_config/json_config.sh@335 -- # return 0 00:05:12.320 INFO: Success 00:05:12.320 10:48:59 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:05:12.320 ************************************ 00:05:12.320 END TEST json_config 00:05:12.320 
************************************ 00:05:12.320 00:05:12.320 real 0m8.333s 00:05:12.320 user 0m11.867s 00:05:12.320 sys 0m1.682s 00:05:12.320 10:48:59 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:12.320 10:48:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:12.320 10:48:59 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:12.320 10:48:59 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:12.320 10:48:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:12.320 10:48:59 -- common/autotest_common.sh@10 -- # set +x 00:05:12.320 ************************************ 00:05:12.320 START TEST json_config_extra_key 00:05:12.320 ************************************ 00:05:12.320 10:48:59 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:12.580 10:48:59 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:12.580 10:48:59 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:05:12.580 10:48:59 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:12.580 10:48:59 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:12.580 10:48:59 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:12.580 10:48:59 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:12.580 10:48:59 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:12.580 10:48:59 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:12.580 10:48:59 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:12.580 10:48:59 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:12.580 10:48:59 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:12.580 10:48:59 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:12.580 10:48:59 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:12.580 10:48:59 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:12.580 10:48:59 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:12.580 10:48:59 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:12.580 10:48:59 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:12.580 10:48:59 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:12.580 10:48:59 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:12.580 10:48:59 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:12.580 10:48:59 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:12.580 10:48:59 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:12.580 10:48:59 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:12.580 10:48:59 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:12.580 10:48:59 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:12.580 10:48:59 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:12.580 10:48:59 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:12.580 10:48:59 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:12.580 10:48:59 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:12.580 10:48:59 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:12.580 10:48:59 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:12.580 10:48:59 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:12.580 10:48:59 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:12.580 10:48:59 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:12.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.580 --rc genhtml_branch_coverage=1 00:05:12.580 --rc genhtml_function_coverage=1 00:05:12.580 --rc genhtml_legend=1 00:05:12.580 --rc geninfo_all_blocks=1 00:05:12.580 --rc geninfo_unexecuted_blocks=1 00:05:12.580 00:05:12.580 ' 00:05:12.580 10:48:59 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:12.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.580 --rc genhtml_branch_coverage=1 00:05:12.580 --rc genhtml_function_coverage=1 00:05:12.580 --rc genhtml_legend=1 00:05:12.580 --rc geninfo_all_blocks=1 00:05:12.580 --rc geninfo_unexecuted_blocks=1 00:05:12.580 00:05:12.580 ' 00:05:12.580 10:48:59 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:12.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.580 --rc genhtml_branch_coverage=1 00:05:12.580 --rc genhtml_function_coverage=1 00:05:12.580 --rc genhtml_legend=1 00:05:12.580 --rc geninfo_all_blocks=1 00:05:12.580 --rc geninfo_unexecuted_blocks=1 00:05:12.580 00:05:12.580 ' 00:05:12.580 10:48:59 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:12.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.580 --rc genhtml_branch_coverage=1 00:05:12.580 --rc genhtml_function_coverage=1 00:05:12.580 --rc genhtml_legend=1 00:05:12.580 --rc geninfo_all_blocks=1 00:05:12.580 --rc geninfo_unexecuted_blocks=1 00:05:12.580 00:05:12.580 ' 00:05:12.580 10:48:59 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:12.580 10:48:59 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:12.580 10:48:59 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:12.580 10:48:59 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:12.580 10:48:59 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:12.580 10:48:59 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:12.580 10:48:59 
json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:12.580 10:48:59 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:12.580 10:48:59 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:12.580 10:48:59 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:12.580 10:48:59 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:12.580 10:48:59 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:12.580 10:48:59 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:05:12.580 10:48:59 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:05:12.580 10:48:59 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:12.580 10:48:59 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:12.580 10:48:59 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:12.580 10:48:59 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:12.580 10:48:59 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:12.580 10:48:59 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:12.580 10:48:59 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:12.580 10:48:59 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:12.580 10:48:59 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:12.580 10:48:59 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:12.580 10:48:59 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:12.581 10:48:59 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:12.581 10:48:59 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:12.581 10:48:59 json_config_extra_key -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:12.581 10:48:59 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:12.581 10:48:59 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:12.581 10:48:59 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:12.581 10:48:59 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:12.581 10:48:59 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:12.581 10:48:59 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:12.581 10:48:59 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:12.581 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:12.581 10:48:59 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:12.581 10:48:59 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:12.581 10:48:59 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:12.581 10:48:59 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:12.581 10:48:59 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:12.581 INFO: launching applications... 00:05:12.581 10:48:59 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:12.581 10:48:59 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:12.581 10:48:59 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:12.581 10:48:59 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:12.581 10:48:59 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:12.581 10:48:59 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:12.581 10:48:59 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:12.581 10:48:59 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:12.581 10:48:59 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
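The "[: : integer expression expected" message above comes from an arithmetic test whose left operand expanded to the empty string ('[' '' -eq 1 ']'). A minimal sketch of that failure and two defensive guards; some_flag is a placeholder variable, not the one actually used at nvmf/common.sh line 33:

    # Reproduces the error from the log when some_flag is unset or empty:
    #   [: : integer expression expected
    [ "$some_flag" -eq 1 ] && echo "enabled"

    # Defensive variants: supply a default, or check for non-emptiness first.
    [ "${some_flag:-0}" -eq 1 ] && echo "enabled"
    [ -n "$some_flag" ] && [ "$some_flag" -eq 1 ] && echo "enabled"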
00:05:12.581 10:48:59 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:12.581 10:48:59 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:12.581 10:48:59 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:12.581 10:48:59 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:12.581 10:48:59 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:12.581 10:48:59 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:12.581 10:48:59 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:12.581 10:48:59 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:12.581 10:48:59 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57565 00:05:12.581 10:48:59 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:12.581 Waiting for target to run... 00:05:12.581 10:48:59 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:12.581 10:48:59 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57565 /var/tmp/spdk_tgt.sock 00:05:12.581 10:48:59 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57565 ']' 00:05:12.581 10:48:59 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:12.581 10:48:59 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:12.581 10:48:59 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:12.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:12.581 10:48:59 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:12.581 10:48:59 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:12.581 [2024-11-15 10:48:59.399184] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:05:12.581 [2024-11-15 10:48:59.399452] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57565 ] 00:05:13.148 [2024-11-15 10:48:59.826850] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.148 [2024-11-15 10:48:59.861342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.148 [2024-11-15 10:48:59.892255] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:13.717 00:05:13.717 INFO: shutting down applications... 00:05:13.717 10:49:00 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:13.717 10:49:00 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:13.717 10:49:00 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:13.717 10:49:00 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
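The shutdown that follows uses the same pattern as the json_config run earlier: send SIGINT to the target, then poll with kill -0 until the process is gone. A minimal sketch of that loop, with the pid, 30-iteration cap, and 0.5 s sleep taken from the trace:

    app_pid=57565                      # pid recorded when the target was launched
    kill -SIGINT "$app_pid"

    # Poll up to 30 times (about 15 seconds) for the process to exit.
    for (( i = 0; i < 30; i++ )); do
        if ! kill -0 "$app_pid" 2>/dev/null; then
            echo 'SPDK target shutdown done'
            break
        fi
        sleep 0.5
    done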
00:05:13.717 10:49:00 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:13.717 10:49:00 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:13.717 10:49:00 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:13.717 10:49:00 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57565 ]] 00:05:13.717 10:49:00 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57565 00:05:13.717 10:49:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:13.717 10:49:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:13.717 10:49:00 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57565 00:05:13.717 10:49:00 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:14.285 10:49:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:14.285 10:49:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:14.285 10:49:00 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57565 00:05:14.285 10:49:00 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:14.285 10:49:00 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:14.285 SPDK target shutdown done 00:05:14.285 Success 00:05:14.285 10:49:00 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:14.285 10:49:00 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:14.285 10:49:00 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:14.285 00:05:14.285 real 0m1.734s 00:05:14.285 user 0m1.611s 00:05:14.285 sys 0m0.432s 00:05:14.285 ************************************ 00:05:14.285 END TEST json_config_extra_key 00:05:14.285 ************************************ 00:05:14.285 10:49:00 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:14.285 10:49:00 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:14.285 10:49:00 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:14.285 10:49:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:14.285 10:49:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:14.285 10:49:00 -- common/autotest_common.sh@10 -- # set +x 00:05:14.285 ************************************ 00:05:14.285 START TEST alias_rpc 00:05:14.285 ************************************ 00:05:14.285 10:49:00 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:14.285 * Looking for test storage... 
00:05:14.285 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:14.285 10:49:01 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:14.285 10:49:01 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:14.285 10:49:01 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:14.285 10:49:01 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:14.285 10:49:01 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:14.285 10:49:01 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:14.285 10:49:01 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:14.285 10:49:01 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:14.285 10:49:01 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:14.285 10:49:01 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:14.285 10:49:01 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:14.285 10:49:01 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:14.285 10:49:01 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:14.285 10:49:01 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:14.285 10:49:01 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:14.285 10:49:01 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:14.285 10:49:01 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:14.285 10:49:01 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:14.285 10:49:01 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:14.285 10:49:01 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:14.285 10:49:01 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:14.285 10:49:01 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:14.285 10:49:01 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:14.285 10:49:01 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:14.285 10:49:01 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:14.286 10:49:01 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:14.286 10:49:01 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:14.286 10:49:01 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:14.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:14.286 10:49:01 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:14.286 10:49:01 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:14.286 10:49:01 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:14.286 10:49:01 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:14.286 10:49:01 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:14.286 10:49:01 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:14.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.286 --rc genhtml_branch_coverage=1 00:05:14.286 --rc genhtml_function_coverage=1 00:05:14.286 --rc genhtml_legend=1 00:05:14.286 --rc geninfo_all_blocks=1 00:05:14.286 --rc geninfo_unexecuted_blocks=1 00:05:14.286 00:05:14.286 ' 00:05:14.286 10:49:01 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:14.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.286 --rc genhtml_branch_coverage=1 00:05:14.286 --rc genhtml_function_coverage=1 00:05:14.286 --rc genhtml_legend=1 00:05:14.286 --rc geninfo_all_blocks=1 00:05:14.286 --rc geninfo_unexecuted_blocks=1 00:05:14.286 00:05:14.286 ' 00:05:14.286 10:49:01 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:14.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.286 --rc genhtml_branch_coverage=1 00:05:14.286 --rc genhtml_function_coverage=1 00:05:14.286 --rc genhtml_legend=1 00:05:14.286 --rc geninfo_all_blocks=1 00:05:14.286 --rc geninfo_unexecuted_blocks=1 00:05:14.286 00:05:14.286 ' 00:05:14.286 10:49:01 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:14.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.286 --rc genhtml_branch_coverage=1 00:05:14.286 --rc genhtml_function_coverage=1 00:05:14.286 --rc genhtml_legend=1 00:05:14.286 --rc geninfo_all_blocks=1 00:05:14.286 --rc geninfo_unexecuted_blocks=1 00:05:14.286 00:05:14.286 ' 00:05:14.286 10:49:01 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:14.286 10:49:01 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57643 00:05:14.286 10:49:01 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57643 00:05:14.286 10:49:01 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:14.286 10:49:01 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57643 ']' 00:05:14.286 10:49:01 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:14.286 10:49:01 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:14.286 10:49:01 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:14.286 10:49:01 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:14.286 10:49:01 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.545 [2024-11-15 10:49:01.188785] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
00:05:14.545 [2024-11-15 10:49:01.189085] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57643 ] 00:05:14.545 [2024-11-15 10:49:01.333284] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.545 [2024-11-15 10:49:01.379994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.804 [2024-11-15 10:49:01.457421] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:14.804 10:49:01 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:14.804 10:49:01 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:14.804 10:49:01 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:15.373 10:49:01 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57643 00:05:15.373 10:49:01 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57643 ']' 00:05:15.374 10:49:01 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57643 00:05:15.374 10:49:01 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:15.374 10:49:01 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:15.374 10:49:01 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57643 00:05:15.374 killing process with pid 57643 00:05:15.374 10:49:01 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:15.374 10:49:01 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:15.374 10:49:01 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57643' 00:05:15.374 10:49:01 alias_rpc -- common/autotest_common.sh@973 -- # kill 57643 00:05:15.374 10:49:01 alias_rpc -- common/autotest_common.sh@978 -- # wait 57643 00:05:15.633 ************************************ 00:05:15.633 END TEST alias_rpc 00:05:15.633 ************************************ 00:05:15.633 00:05:15.633 real 0m1.417s 00:05:15.633 user 0m1.487s 00:05:15.633 sys 0m0.429s 00:05:15.633 10:49:02 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:15.633 10:49:02 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:15.633 10:49:02 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:15.633 10:49:02 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:15.633 10:49:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:15.633 10:49:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:15.633 10:49:02 -- common/autotest_common.sh@10 -- # set +x 00:05:15.633 ************************************ 00:05:15.633 START TEST spdkcli_tcp 00:05:15.633 ************************************ 00:05:15.633 10:49:02 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:15.633 * Looking for test storage... 
00:05:15.633 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:15.633 10:49:02 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:15.633 10:49:02 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:05:15.634 10:49:02 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:15.893 10:49:02 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:15.893 10:49:02 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:15.893 10:49:02 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:15.893 10:49:02 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:15.893 10:49:02 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:15.893 10:49:02 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:15.893 10:49:02 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:15.893 10:49:02 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:15.893 10:49:02 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:15.893 10:49:02 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:15.893 10:49:02 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:15.893 10:49:02 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:15.893 10:49:02 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:15.893 10:49:02 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:15.893 10:49:02 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:15.893 10:49:02 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:15.893 10:49:02 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:15.893 10:49:02 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:15.893 10:49:02 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:15.894 10:49:02 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:15.894 10:49:02 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:15.894 10:49:02 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:15.894 10:49:02 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:15.894 10:49:02 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:15.894 10:49:02 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:15.894 10:49:02 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:15.894 10:49:02 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:15.894 10:49:02 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:15.894 10:49:02 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:15.894 10:49:02 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:15.894 10:49:02 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:15.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.894 --rc genhtml_branch_coverage=1 00:05:15.894 --rc genhtml_function_coverage=1 00:05:15.894 --rc genhtml_legend=1 00:05:15.894 --rc geninfo_all_blocks=1 00:05:15.894 --rc geninfo_unexecuted_blocks=1 00:05:15.894 00:05:15.894 ' 00:05:15.894 10:49:02 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:15.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.894 --rc genhtml_branch_coverage=1 00:05:15.894 --rc genhtml_function_coverage=1 00:05:15.894 --rc genhtml_legend=1 00:05:15.894 --rc geninfo_all_blocks=1 00:05:15.894 --rc geninfo_unexecuted_blocks=1 00:05:15.894 
00:05:15.894 ' 00:05:15.894 10:49:02 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:15.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.894 --rc genhtml_branch_coverage=1 00:05:15.894 --rc genhtml_function_coverage=1 00:05:15.894 --rc genhtml_legend=1 00:05:15.894 --rc geninfo_all_blocks=1 00:05:15.894 --rc geninfo_unexecuted_blocks=1 00:05:15.894 00:05:15.894 ' 00:05:15.894 10:49:02 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:15.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.894 --rc genhtml_branch_coverage=1 00:05:15.894 --rc genhtml_function_coverage=1 00:05:15.894 --rc genhtml_legend=1 00:05:15.894 --rc geninfo_all_blocks=1 00:05:15.894 --rc geninfo_unexecuted_blocks=1 00:05:15.894 00:05:15.894 ' 00:05:15.894 10:49:02 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:15.894 10:49:02 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:15.894 10:49:02 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:15.894 10:49:02 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:15.894 10:49:02 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:15.894 10:49:02 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:15.894 10:49:02 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:15.894 10:49:02 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:15.894 10:49:02 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:15.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:15.894 10:49:02 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57720 00:05:15.894 10:49:02 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57720 00:05:15.894 10:49:02 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 57720 ']' 00:05:15.894 10:49:02 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:15.894 10:49:02 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:15.894 10:49:02 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:15.894 10:49:02 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:15.894 10:49:02 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:15.894 10:49:02 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:15.894 [2024-11-15 10:49:02.651260] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
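What this test exercises next is RPC over TCP: socat listens on 127.0.0.1:9998 and forwards to the target's UNIX-domain socket, and rpc.py is then pointed at the TCP endpoint. A minimal sketch of that bridge, using the exact addresses and flags from the trace below (a single-connection socat, as in the log; backgrounding and cleanup are added for completeness):

    # Bridge TCP port 9998 to the target's UNIX-domain RPC socket.
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!

    # Issue an RPC over TCP: -r sets retries, -t a per-call timeout,
    # -s/-p the TCP address and port.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 \
        -s 127.0.0.1 -p 9998 rpc_get_methods

    kill "$socat_pid"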
00:05:15.894 [2024-11-15 10:49:02.651559] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57720 ] 00:05:16.154 [2024-11-15 10:49:02.793230] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:16.154 [2024-11-15 10:49:02.845562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:16.154 [2024-11-15 10:49:02.845586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.154 [2024-11-15 10:49:02.912893] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:16.413 10:49:03 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:16.413 10:49:03 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:16.413 10:49:03 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57729 00:05:16.413 10:49:03 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:16.413 10:49:03 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:16.674 [ 00:05:16.674 "bdev_malloc_delete", 00:05:16.674 "bdev_malloc_create", 00:05:16.674 "bdev_null_resize", 00:05:16.674 "bdev_null_delete", 00:05:16.674 "bdev_null_create", 00:05:16.674 "bdev_nvme_cuse_unregister", 00:05:16.674 "bdev_nvme_cuse_register", 00:05:16.674 "bdev_opal_new_user", 00:05:16.674 "bdev_opal_set_lock_state", 00:05:16.674 "bdev_opal_delete", 00:05:16.674 "bdev_opal_get_info", 00:05:16.674 "bdev_opal_create", 00:05:16.674 "bdev_nvme_opal_revert", 00:05:16.674 "bdev_nvme_opal_init", 00:05:16.674 "bdev_nvme_send_cmd", 00:05:16.674 "bdev_nvme_set_keys", 00:05:16.674 "bdev_nvme_get_path_iostat", 00:05:16.674 "bdev_nvme_get_mdns_discovery_info", 00:05:16.674 "bdev_nvme_stop_mdns_discovery", 00:05:16.674 "bdev_nvme_start_mdns_discovery", 00:05:16.674 "bdev_nvme_set_multipath_policy", 00:05:16.674 "bdev_nvme_set_preferred_path", 00:05:16.674 "bdev_nvme_get_io_paths", 00:05:16.674 "bdev_nvme_remove_error_injection", 00:05:16.674 "bdev_nvme_add_error_injection", 00:05:16.674 "bdev_nvme_get_discovery_info", 00:05:16.674 "bdev_nvme_stop_discovery", 00:05:16.674 "bdev_nvme_start_discovery", 00:05:16.674 "bdev_nvme_get_controller_health_info", 00:05:16.674 "bdev_nvme_disable_controller", 00:05:16.674 "bdev_nvme_enable_controller", 00:05:16.674 "bdev_nvme_reset_controller", 00:05:16.674 "bdev_nvme_get_transport_statistics", 00:05:16.674 "bdev_nvme_apply_firmware", 00:05:16.674 "bdev_nvme_detach_controller", 00:05:16.674 "bdev_nvme_get_controllers", 00:05:16.674 "bdev_nvme_attach_controller", 00:05:16.674 "bdev_nvme_set_hotplug", 00:05:16.674 "bdev_nvme_set_options", 00:05:16.674 "bdev_passthru_delete", 00:05:16.674 "bdev_passthru_create", 00:05:16.674 "bdev_lvol_set_parent_bdev", 00:05:16.674 "bdev_lvol_set_parent", 00:05:16.674 "bdev_lvol_check_shallow_copy", 00:05:16.674 "bdev_lvol_start_shallow_copy", 00:05:16.674 "bdev_lvol_grow_lvstore", 00:05:16.674 "bdev_lvol_get_lvols", 00:05:16.674 "bdev_lvol_get_lvstores", 00:05:16.674 "bdev_lvol_delete", 00:05:16.674 "bdev_lvol_set_read_only", 00:05:16.674 "bdev_lvol_resize", 00:05:16.674 "bdev_lvol_decouple_parent", 00:05:16.674 "bdev_lvol_inflate", 00:05:16.674 "bdev_lvol_rename", 00:05:16.674 "bdev_lvol_clone_bdev", 00:05:16.674 "bdev_lvol_clone", 00:05:16.674 "bdev_lvol_snapshot", 
00:05:16.674 "bdev_lvol_create", 00:05:16.674 "bdev_lvol_delete_lvstore", 00:05:16.674 "bdev_lvol_rename_lvstore", 00:05:16.674 "bdev_lvol_create_lvstore", 00:05:16.674 "bdev_raid_set_options", 00:05:16.674 "bdev_raid_remove_base_bdev", 00:05:16.674 "bdev_raid_add_base_bdev", 00:05:16.674 "bdev_raid_delete", 00:05:16.674 "bdev_raid_create", 00:05:16.674 "bdev_raid_get_bdevs", 00:05:16.674 "bdev_error_inject_error", 00:05:16.674 "bdev_error_delete", 00:05:16.674 "bdev_error_create", 00:05:16.674 "bdev_split_delete", 00:05:16.674 "bdev_split_create", 00:05:16.674 "bdev_delay_delete", 00:05:16.674 "bdev_delay_create", 00:05:16.674 "bdev_delay_update_latency", 00:05:16.674 "bdev_zone_block_delete", 00:05:16.674 "bdev_zone_block_create", 00:05:16.674 "blobfs_create", 00:05:16.674 "blobfs_detect", 00:05:16.674 "blobfs_set_cache_size", 00:05:16.674 "bdev_aio_delete", 00:05:16.674 "bdev_aio_rescan", 00:05:16.674 "bdev_aio_create", 00:05:16.674 "bdev_ftl_set_property", 00:05:16.674 "bdev_ftl_get_properties", 00:05:16.674 "bdev_ftl_get_stats", 00:05:16.674 "bdev_ftl_unmap", 00:05:16.674 "bdev_ftl_unload", 00:05:16.674 "bdev_ftl_delete", 00:05:16.674 "bdev_ftl_load", 00:05:16.674 "bdev_ftl_create", 00:05:16.674 "bdev_virtio_attach_controller", 00:05:16.674 "bdev_virtio_scsi_get_devices", 00:05:16.674 "bdev_virtio_detach_controller", 00:05:16.674 "bdev_virtio_blk_set_hotplug", 00:05:16.674 "bdev_iscsi_delete", 00:05:16.674 "bdev_iscsi_create", 00:05:16.674 "bdev_iscsi_set_options", 00:05:16.674 "bdev_uring_delete", 00:05:16.674 "bdev_uring_rescan", 00:05:16.674 "bdev_uring_create", 00:05:16.674 "accel_error_inject_error", 00:05:16.674 "ioat_scan_accel_module", 00:05:16.674 "dsa_scan_accel_module", 00:05:16.674 "iaa_scan_accel_module", 00:05:16.674 "keyring_file_remove_key", 00:05:16.674 "keyring_file_add_key", 00:05:16.674 "keyring_linux_set_options", 00:05:16.674 "fsdev_aio_delete", 00:05:16.674 "fsdev_aio_create", 00:05:16.674 "iscsi_get_histogram", 00:05:16.674 "iscsi_enable_histogram", 00:05:16.674 "iscsi_set_options", 00:05:16.674 "iscsi_get_auth_groups", 00:05:16.674 "iscsi_auth_group_remove_secret", 00:05:16.674 "iscsi_auth_group_add_secret", 00:05:16.674 "iscsi_delete_auth_group", 00:05:16.674 "iscsi_create_auth_group", 00:05:16.674 "iscsi_set_discovery_auth", 00:05:16.674 "iscsi_get_options", 00:05:16.674 "iscsi_target_node_request_logout", 00:05:16.674 "iscsi_target_node_set_redirect", 00:05:16.674 "iscsi_target_node_set_auth", 00:05:16.674 "iscsi_target_node_add_lun", 00:05:16.674 "iscsi_get_stats", 00:05:16.674 "iscsi_get_connections", 00:05:16.674 "iscsi_portal_group_set_auth", 00:05:16.674 "iscsi_start_portal_group", 00:05:16.674 "iscsi_delete_portal_group", 00:05:16.674 "iscsi_create_portal_group", 00:05:16.674 "iscsi_get_portal_groups", 00:05:16.674 "iscsi_delete_target_node", 00:05:16.674 "iscsi_target_node_remove_pg_ig_maps", 00:05:16.674 "iscsi_target_node_add_pg_ig_maps", 00:05:16.674 "iscsi_create_target_node", 00:05:16.674 "iscsi_get_target_nodes", 00:05:16.674 "iscsi_delete_initiator_group", 00:05:16.674 "iscsi_initiator_group_remove_initiators", 00:05:16.674 "iscsi_initiator_group_add_initiators", 00:05:16.674 "iscsi_create_initiator_group", 00:05:16.674 "iscsi_get_initiator_groups", 00:05:16.674 "nvmf_set_crdt", 00:05:16.674 "nvmf_set_config", 00:05:16.674 "nvmf_set_max_subsystems", 00:05:16.674 "nvmf_stop_mdns_prr", 00:05:16.674 "nvmf_publish_mdns_prr", 00:05:16.674 "nvmf_subsystem_get_listeners", 00:05:16.674 "nvmf_subsystem_get_qpairs", 00:05:16.674 
"nvmf_subsystem_get_controllers", 00:05:16.674 "nvmf_get_stats", 00:05:16.674 "nvmf_get_transports", 00:05:16.674 "nvmf_create_transport", 00:05:16.674 "nvmf_get_targets", 00:05:16.674 "nvmf_delete_target", 00:05:16.674 "nvmf_create_target", 00:05:16.674 "nvmf_subsystem_allow_any_host", 00:05:16.674 "nvmf_subsystem_set_keys", 00:05:16.674 "nvmf_subsystem_remove_host", 00:05:16.674 "nvmf_subsystem_add_host", 00:05:16.674 "nvmf_ns_remove_host", 00:05:16.674 "nvmf_ns_add_host", 00:05:16.674 "nvmf_subsystem_remove_ns", 00:05:16.674 "nvmf_subsystem_set_ns_ana_group", 00:05:16.674 "nvmf_subsystem_add_ns", 00:05:16.674 "nvmf_subsystem_listener_set_ana_state", 00:05:16.675 "nvmf_discovery_get_referrals", 00:05:16.675 "nvmf_discovery_remove_referral", 00:05:16.675 "nvmf_discovery_add_referral", 00:05:16.675 "nvmf_subsystem_remove_listener", 00:05:16.675 "nvmf_subsystem_add_listener", 00:05:16.675 "nvmf_delete_subsystem", 00:05:16.675 "nvmf_create_subsystem", 00:05:16.675 "nvmf_get_subsystems", 00:05:16.675 "env_dpdk_get_mem_stats", 00:05:16.675 "nbd_get_disks", 00:05:16.675 "nbd_stop_disk", 00:05:16.675 "nbd_start_disk", 00:05:16.675 "ublk_recover_disk", 00:05:16.675 "ublk_get_disks", 00:05:16.675 "ublk_stop_disk", 00:05:16.675 "ublk_start_disk", 00:05:16.675 "ublk_destroy_target", 00:05:16.675 "ublk_create_target", 00:05:16.675 "virtio_blk_create_transport", 00:05:16.675 "virtio_blk_get_transports", 00:05:16.675 "vhost_controller_set_coalescing", 00:05:16.675 "vhost_get_controllers", 00:05:16.675 "vhost_delete_controller", 00:05:16.675 "vhost_create_blk_controller", 00:05:16.675 "vhost_scsi_controller_remove_target", 00:05:16.675 "vhost_scsi_controller_add_target", 00:05:16.675 "vhost_start_scsi_controller", 00:05:16.675 "vhost_create_scsi_controller", 00:05:16.675 "thread_set_cpumask", 00:05:16.675 "scheduler_set_options", 00:05:16.675 "framework_get_governor", 00:05:16.675 "framework_get_scheduler", 00:05:16.675 "framework_set_scheduler", 00:05:16.675 "framework_get_reactors", 00:05:16.675 "thread_get_io_channels", 00:05:16.675 "thread_get_pollers", 00:05:16.675 "thread_get_stats", 00:05:16.675 "framework_monitor_context_switch", 00:05:16.675 "spdk_kill_instance", 00:05:16.675 "log_enable_timestamps", 00:05:16.675 "log_get_flags", 00:05:16.675 "log_clear_flag", 00:05:16.675 "log_set_flag", 00:05:16.675 "log_get_level", 00:05:16.675 "log_set_level", 00:05:16.675 "log_get_print_level", 00:05:16.675 "log_set_print_level", 00:05:16.675 "framework_enable_cpumask_locks", 00:05:16.675 "framework_disable_cpumask_locks", 00:05:16.675 "framework_wait_init", 00:05:16.675 "framework_start_init", 00:05:16.675 "scsi_get_devices", 00:05:16.675 "bdev_get_histogram", 00:05:16.675 "bdev_enable_histogram", 00:05:16.675 "bdev_set_qos_limit", 00:05:16.675 "bdev_set_qd_sampling_period", 00:05:16.675 "bdev_get_bdevs", 00:05:16.675 "bdev_reset_iostat", 00:05:16.675 "bdev_get_iostat", 00:05:16.675 "bdev_examine", 00:05:16.675 "bdev_wait_for_examine", 00:05:16.675 "bdev_set_options", 00:05:16.675 "accel_get_stats", 00:05:16.675 "accel_set_options", 00:05:16.675 "accel_set_driver", 00:05:16.675 "accel_crypto_key_destroy", 00:05:16.675 "accel_crypto_keys_get", 00:05:16.675 "accel_crypto_key_create", 00:05:16.675 "accel_assign_opc", 00:05:16.675 "accel_get_module_info", 00:05:16.675 "accel_get_opc_assignments", 00:05:16.675 "vmd_rescan", 00:05:16.675 "vmd_remove_device", 00:05:16.675 "vmd_enable", 00:05:16.675 "sock_get_default_impl", 00:05:16.675 "sock_set_default_impl", 00:05:16.675 "sock_impl_set_options", 00:05:16.675 
"sock_impl_get_options", 00:05:16.675 "iobuf_get_stats", 00:05:16.675 "iobuf_set_options", 00:05:16.675 "keyring_get_keys", 00:05:16.675 "framework_get_pci_devices", 00:05:16.675 "framework_get_config", 00:05:16.675 "framework_get_subsystems", 00:05:16.675 "fsdev_set_opts", 00:05:16.675 "fsdev_get_opts", 00:05:16.675 "trace_get_info", 00:05:16.675 "trace_get_tpoint_group_mask", 00:05:16.675 "trace_disable_tpoint_group", 00:05:16.675 "trace_enable_tpoint_group", 00:05:16.675 "trace_clear_tpoint_mask", 00:05:16.675 "trace_set_tpoint_mask", 00:05:16.675 "notify_get_notifications", 00:05:16.675 "notify_get_types", 00:05:16.675 "spdk_get_version", 00:05:16.675 "rpc_get_methods" 00:05:16.675 ] 00:05:16.675 10:49:03 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:16.675 10:49:03 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:16.675 10:49:03 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:16.675 10:49:03 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:16.675 10:49:03 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57720 00:05:16.675 10:49:03 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 57720 ']' 00:05:16.675 10:49:03 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 57720 00:05:16.675 10:49:03 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:16.675 10:49:03 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:16.675 10:49:03 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57720 00:05:16.675 killing process with pid 57720 00:05:16.675 10:49:03 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:16.675 10:49:03 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:16.675 10:49:03 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57720' 00:05:16.675 10:49:03 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 57720 00:05:16.675 10:49:03 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 57720 00:05:17.245 ************************************ 00:05:17.245 END TEST spdkcli_tcp 00:05:17.245 ************************************ 00:05:17.245 00:05:17.245 real 0m1.429s 00:05:17.245 user 0m2.439s 00:05:17.245 sys 0m0.463s 00:05:17.245 10:49:03 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:17.245 10:49:03 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:17.245 10:49:03 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:17.245 10:49:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:17.245 10:49:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:17.245 10:49:03 -- common/autotest_common.sh@10 -- # set +x 00:05:17.245 ************************************ 00:05:17.245 START TEST dpdk_mem_utility 00:05:17.245 ************************************ 00:05:17.245 10:49:03 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:17.245 * Looking for test storage... 
00:05:17.245 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:17.245 10:49:03 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:17.245 10:49:03 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:05:17.245 10:49:03 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:17.245 10:49:04 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:17.245 10:49:04 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:17.245 10:49:04 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:17.245 10:49:04 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:17.245 10:49:04 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:17.245 10:49:04 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:17.245 10:49:04 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:17.245 10:49:04 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:17.246 10:49:04 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:17.246 10:49:04 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:17.246 10:49:04 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:17.246 10:49:04 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:17.246 10:49:04 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:17.246 10:49:04 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:17.246 10:49:04 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:17.246 10:49:04 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:17.246 10:49:04 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:17.246 10:49:04 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:17.246 10:49:04 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:17.246 10:49:04 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:17.246 10:49:04 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:17.246 10:49:04 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:17.246 10:49:04 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:17.246 10:49:04 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:17.246 10:49:04 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:17.246 10:49:04 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:17.246 10:49:04 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:17.246 10:49:04 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:17.246 10:49:04 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:17.246 10:49:04 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:17.246 10:49:04 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:17.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.246 --rc genhtml_branch_coverage=1 00:05:17.246 --rc genhtml_function_coverage=1 00:05:17.246 --rc genhtml_legend=1 00:05:17.246 --rc geninfo_all_blocks=1 00:05:17.246 --rc geninfo_unexecuted_blocks=1 00:05:17.246 00:05:17.246 ' 00:05:17.246 10:49:04 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:17.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.246 --rc 
genhtml_branch_coverage=1 00:05:17.246 --rc genhtml_function_coverage=1 00:05:17.246 --rc genhtml_legend=1 00:05:17.246 --rc geninfo_all_blocks=1 00:05:17.246 --rc geninfo_unexecuted_blocks=1 00:05:17.246 00:05:17.246 ' 00:05:17.246 10:49:04 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:17.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.246 --rc genhtml_branch_coverage=1 00:05:17.246 --rc genhtml_function_coverage=1 00:05:17.246 --rc genhtml_legend=1 00:05:17.246 --rc geninfo_all_blocks=1 00:05:17.246 --rc geninfo_unexecuted_blocks=1 00:05:17.246 00:05:17.246 ' 00:05:17.246 10:49:04 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:17.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.246 --rc genhtml_branch_coverage=1 00:05:17.246 --rc genhtml_function_coverage=1 00:05:17.246 --rc genhtml_legend=1 00:05:17.246 --rc geninfo_all_blocks=1 00:05:17.246 --rc geninfo_unexecuted_blocks=1 00:05:17.246 00:05:17.246 ' 00:05:17.246 10:49:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:17.246 10:49:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=57806 00:05:17.246 10:49:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:17.246 10:49:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 57806 00:05:17.246 10:49:04 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 57806 ']' 00:05:17.246 10:49:04 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:17.246 10:49:04 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:17.246 10:49:04 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:17.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:17.246 10:49:04 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:17.246 10:49:04 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:17.506 [2024-11-15 10:49:04.157804] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
00:05:17.506 [2024-11-15 10:49:04.158792] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57806 ] 00:05:17.506 [2024-11-15 10:49:04.314243] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.506 [2024-11-15 10:49:04.357832] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.766 [2024-11-15 10:49:04.426334] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:18.028 10:49:04 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:18.028 10:49:04 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:18.028 10:49:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:18.028 10:49:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:18.028 10:49:04 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:18.028 10:49:04 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:18.028 { 00:05:18.028 "filename": "/tmp/spdk_mem_dump.txt" 00:05:18.028 } 00:05:18.028 10:49:04 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:18.028 10:49:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:18.028 DPDK memory size 810.000000 MiB in 1 heap(s) 00:05:18.028 1 heaps totaling size 810.000000 MiB 00:05:18.028 size: 810.000000 MiB heap id: 0 00:05:18.028 end heaps---------- 00:05:18.028 9 mempools totaling size 595.772034 MiB 00:05:18.028 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:18.028 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:18.028 size: 92.545471 MiB name: bdev_io_57806 00:05:18.028 size: 50.003479 MiB name: msgpool_57806 00:05:18.028 size: 36.509338 MiB name: fsdev_io_57806 00:05:18.028 size: 21.763794 MiB name: PDU_Pool 00:05:18.028 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:18.028 size: 4.133484 MiB name: evtpool_57806 00:05:18.028 size: 0.026123 MiB name: Session_Pool 00:05:18.028 end mempools------- 00:05:18.028 6 memzones totaling size 4.142822 MiB 00:05:18.028 size: 1.000366 MiB name: RG_ring_0_57806 00:05:18.028 size: 1.000366 MiB name: RG_ring_1_57806 00:05:18.028 size: 1.000366 MiB name: RG_ring_4_57806 00:05:18.028 size: 1.000366 MiB name: RG_ring_5_57806 00:05:18.028 size: 0.125366 MiB name: RG_ring_2_57806 00:05:18.028 size: 0.015991 MiB name: RG_ring_3_57806 00:05:18.028 end memzones------- 00:05:18.028 10:49:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:18.028 heap id: 0 total size: 810.000000 MiB number of busy elements: 312 number of free elements: 15 00:05:18.028 list of free elements. 
size: 10.813416 MiB 00:05:18.028 element at address: 0x200018a00000 with size: 0.999878 MiB 00:05:18.028 element at address: 0x200018c00000 with size: 0.999878 MiB 00:05:18.028 element at address: 0x200031800000 with size: 0.994446 MiB 00:05:18.028 element at address: 0x200000400000 with size: 0.993958 MiB 00:05:18.028 element at address: 0x200006400000 with size: 0.959839 MiB 00:05:18.028 element at address: 0x200012c00000 with size: 0.954285 MiB 00:05:18.028 element at address: 0x200018e00000 with size: 0.936584 MiB 00:05:18.028 element at address: 0x200000200000 with size: 0.717346 MiB 00:05:18.028 element at address: 0x20001a600000 with size: 0.567871 MiB 00:05:18.028 element at address: 0x20000a600000 with size: 0.488892 MiB 00:05:18.028 element at address: 0x200000c00000 with size: 0.487000 MiB 00:05:18.028 element at address: 0x200019000000 with size: 0.485657 MiB 00:05:18.028 element at address: 0x200003e00000 with size: 0.480286 MiB 00:05:18.028 element at address: 0x200027a00000 with size: 0.395752 MiB 00:05:18.028 element at address: 0x200000800000 with size: 0.351746 MiB 00:05:18.028 list of standard malloc elements. size: 199.267700 MiB 00:05:18.028 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:05:18.028 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:05:18.028 element at address: 0x200018afff80 with size: 1.000122 MiB 00:05:18.028 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:05:18.028 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:18.028 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:18.028 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:05:18.028 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:18.028 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:05:18.028 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:18.028 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:18.028 element at address: 0x2000004fe740 with size: 0.000183 MiB 00:05:18.028 element at address: 0x2000004fe800 with size: 0.000183 MiB 00:05:18.028 element at address: 0x2000004fe8c0 with size: 0.000183 MiB 00:05:18.028 element at address: 0x2000004fe980 with size: 0.000183 MiB 00:05:18.028 element at address: 0x2000004fea40 with size: 0.000183 MiB 00:05:18.028 element at address: 0x2000004feb00 with size: 0.000183 MiB 00:05:18.028 element at address: 0x2000004febc0 with size: 0.000183 MiB 00:05:18.028 element at address: 0x2000004fec80 with size: 0.000183 MiB 00:05:18.028 element at address: 0x2000004fed40 with size: 0.000183 MiB 00:05:18.028 element at address: 0x2000004fee00 with size: 0.000183 MiB 00:05:18.028 element at address: 0x2000004feec0 with size: 0.000183 MiB 00:05:18.028 element at address: 0x2000004fef80 with size: 0.000183 MiB 00:05:18.028 element at address: 0x2000004ff040 with size: 0.000183 MiB 00:05:18.028 element at address: 0x2000004ff100 with size: 0.000183 MiB 00:05:18.028 element at address: 0x2000004ff1c0 with size: 0.000183 MiB 00:05:18.028 element at address: 0x2000004ff280 with size: 0.000183 MiB 00:05:18.028 element at address: 0x2000004ff340 with size: 0.000183 MiB 00:05:18.028 element at address: 0x2000004ff400 with size: 0.000183 MiB 00:05:18.028 element at address: 0x2000004ff4c0 with size: 0.000183 MiB 00:05:18.028 element at address: 0x2000004ff580 with size: 0.000183 MiB 00:05:18.028 element at address: 0x2000004ff640 with size: 0.000183 MiB 00:05:18.028 element at address: 0x2000004ff700 with size: 0.000183 MiB 
00:05:18.028 element at address: 0x2000004ff7c0 with size: 0.000183 MiB 00:05:18.028 element at address: 0x2000004ff880 with size: 0.000183 MiB 00:05:18.028 element at address: 0x2000004ff940 with size: 0.000183 MiB 00:05:18.028 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:05:18.028 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:05:18.028 element at address: 0x2000004ffcc0 with size: 0.000183 MiB 00:05:18.028 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:05:18.028 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:05:18.028 element at address: 0x20000085a0c0 with size: 0.000183 MiB 00:05:18.028 element at address: 0x20000085a2c0 with size: 0.000183 MiB 00:05:18.028 element at address: 0x20000085e580 with size: 0.000183 MiB 00:05:18.028 element at address: 0x20000087e840 with size: 0.000183 MiB 00:05:18.028 element at address: 0x20000087e900 with size: 0.000183 MiB 00:05:18.028 element at address: 0x20000087e9c0 with size: 0.000183 MiB 00:05:18.028 element at address: 0x20000087ea80 with size: 0.000183 MiB 00:05:18.028 element at address: 0x20000087eb40 with size: 0.000183 MiB 00:05:18.028 element at address: 0x20000087ec00 with size: 0.000183 MiB 00:05:18.028 element at address: 0x20000087ecc0 with size: 0.000183 MiB 00:05:18.028 element at address: 0x20000087ed80 with size: 0.000183 MiB 00:05:18.028 element at address: 0x20000087ee40 with size: 0.000183 MiB 00:05:18.028 element at address: 0x20000087ef00 with size: 0.000183 MiB 00:05:18.028 element at address: 0x20000087efc0 with size: 0.000183 MiB 00:05:18.028 element at address: 0x20000087f080 with size: 0.000183 MiB 00:05:18.028 element at address: 0x20000087f140 with size: 0.000183 MiB 00:05:18.028 element at address: 0x20000087f200 with size: 0.000183 MiB 00:05:18.028 element at address: 0x20000087f2c0 with size: 0.000183 MiB 00:05:18.028 element at address: 0x20000087f380 with size: 0.000183 MiB 00:05:18.028 element at address: 0x20000087f440 with size: 0.000183 MiB 00:05:18.028 element at address: 0x20000087f500 with size: 0.000183 MiB 00:05:18.028 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:05:18.028 element at address: 0x20000087f680 with size: 0.000183 MiB 00:05:18.028 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:05:18.028 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:05:18.028 element at address: 0x200000c7cac0 with size: 0.000183 MiB 00:05:18.028 element at address: 0x200000c7cb80 with size: 0.000183 MiB 00:05:18.028 element at address: 0x200000c7cc40 with size: 0.000183 MiB 00:05:18.028 element at address: 0x200000c7cd00 with size: 0.000183 MiB 00:05:18.028 element at address: 0x200000c7cdc0 with size: 0.000183 MiB 00:05:18.028 element at address: 0x200000c7ce80 with size: 0.000183 MiB 00:05:18.028 element at address: 0x200000c7cf40 with size: 0.000183 MiB 00:05:18.028 element at address: 0x200000c7d000 with size: 0.000183 MiB 00:05:18.028 element at address: 0x200000c7d0c0 with size: 0.000183 MiB 00:05:18.028 element at address: 0x200000c7d180 with size: 0.000183 MiB 00:05:18.028 element at address: 0x200000c7d240 with size: 0.000183 MiB 00:05:18.028 element at address: 0x200000c7d300 with size: 0.000183 MiB 00:05:18.028 element at address: 0x200000c7d3c0 with size: 0.000183 MiB 00:05:18.028 element at address: 0x200000c7d480 with size: 0.000183 MiB 00:05:18.028 element at address: 0x200000c7d540 with size: 0.000183 MiB 00:05:18.028 element at address: 0x200000c7d600 with size: 0.000183 MiB 00:05:18.028 element at 
address: 0x200000c7d6c0 with size: 0.000183 MiB 00:05:18.028 element at address: 0x200000c7d780 with size: 0.000183 MiB 00:05:18.028 element at address: 0x200000c7d840 with size: 0.000183 MiB 00:05:18.028 element at address: 0x200000c7d900 with size: 0.000183 MiB 00:05:18.028 element at address: 0x200000c7d9c0 with size: 0.000183 MiB 00:05:18.028 element at address: 0x200000c7da80 with size: 0.000183 MiB 00:05:18.028 element at address: 0x200000c7db40 with size: 0.000183 MiB 00:05:18.028 element at address: 0x200000c7dc00 with size: 0.000183 MiB 00:05:18.028 element at address: 0x200000c7dcc0 with size: 0.000183 MiB 00:05:18.028 element at address: 0x200000c7dd80 with size: 0.000183 MiB 00:05:18.028 element at address: 0x200000c7de40 with size: 0.000183 MiB 00:05:18.028 element at address: 0x200000c7df00 with size: 0.000183 MiB 00:05:18.028 element at address: 0x200000c7dfc0 with size: 0.000183 MiB 00:05:18.028 element at address: 0x200000c7e080 with size: 0.000183 MiB 00:05:18.028 element at address: 0x200000c7e140 with size: 0.000183 MiB 00:05:18.028 element at address: 0x200000c7e200 with size: 0.000183 MiB 00:05:18.028 element at address: 0x200000c7e2c0 with size: 0.000183 MiB 00:05:18.028 element at address: 0x200000c7e380 with size: 0.000183 MiB 00:05:18.028 element at address: 0x200000c7e440 with size: 0.000183 MiB 00:05:18.028 element at address: 0x200000c7e500 with size: 0.000183 MiB 00:05:18.029 element at address: 0x200000c7e5c0 with size: 0.000183 MiB 00:05:18.029 element at address: 0x200000c7e680 with size: 0.000183 MiB 00:05:18.029 element at address: 0x200000c7e740 with size: 0.000183 MiB 00:05:18.029 element at address: 0x200000c7e800 with size: 0.000183 MiB 00:05:18.029 element at address: 0x200000c7e8c0 with size: 0.000183 MiB 00:05:18.029 element at address: 0x200000c7e980 with size: 0.000183 MiB 00:05:18.029 element at address: 0x200000c7ea40 with size: 0.000183 MiB 00:05:18.029 element at address: 0x200000c7eb00 with size: 0.000183 MiB 00:05:18.029 element at address: 0x200000c7ebc0 with size: 0.000183 MiB 00:05:18.029 element at address: 0x200000c7ec80 with size: 0.000183 MiB 00:05:18.029 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:05:18.029 element at address: 0x200000cff000 with size: 0.000183 MiB 00:05:18.029 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:05:18.029 element at address: 0x200003e7af40 with size: 0.000183 MiB 00:05:18.029 element at address: 0x200003e7b000 with size: 0.000183 MiB 00:05:18.029 element at address: 0x200003e7b0c0 with size: 0.000183 MiB 00:05:18.029 element at address: 0x200003e7b180 with size: 0.000183 MiB 00:05:18.029 element at address: 0x200003e7b240 with size: 0.000183 MiB 00:05:18.029 element at address: 0x200003e7b300 with size: 0.000183 MiB 00:05:18.029 element at address: 0x200003e7b3c0 with size: 0.000183 MiB 00:05:18.029 element at address: 0x200003e7b480 with size: 0.000183 MiB 00:05:18.029 element at address: 0x200003e7b540 with size: 0.000183 MiB 00:05:18.029 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:05:18.029 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:05:18.029 element at address: 0x200003efb980 with size: 0.000183 MiB 00:05:18.029 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:05:18.029 element at address: 0x20000a67d280 with size: 0.000183 MiB 00:05:18.029 element at address: 0x20000a67d340 with size: 0.000183 MiB 00:05:18.029 element at address: 0x20000a67d400 with size: 0.000183 MiB 00:05:18.029 element at address: 0x20000a67d4c0 
with size: 0.000183 MiB 00:05:18.029 element at address: 0x20000a67d580 with size: 0.000183 MiB 00:05:18.029 element at address: 0x20000a67d640 with size: 0.000183 MiB 00:05:18.029 element at address: 0x20000a67d700 with size: 0.000183 MiB 00:05:18.029 element at address: 0x20000a67d7c0 with size: 0.000183 MiB 00:05:18.029 element at address: 0x20000a67d880 with size: 0.000183 MiB 00:05:18.029 element at address: 0x20000a67d940 with size: 0.000183 MiB 00:05:18.029 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:05:18.029 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:05:18.029 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:05:18.029 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:05:18.029 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:05:18.029 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:05:18.029 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:05:18.029 element at address: 0x20001a691600 with size: 0.000183 MiB 00:05:18.029 element at address: 0x20001a6916c0 with size: 0.000183 MiB 00:05:18.029 element at address: 0x20001a691780 with size: 0.000183 MiB 00:05:18.029 element at address: 0x20001a691840 with size: 0.000183 MiB 00:05:18.029 element at address: 0x20001a691900 with size: 0.000183 MiB 00:05:18.029 element at address: 0x20001a6919c0 with size: 0.000183 MiB 00:05:18.029 element at address: 0x20001a691a80 with size: 0.000183 MiB 00:05:18.029 element at address: 0x20001a691b40 with size: 0.000183 MiB 00:05:18.029 element at address: 0x20001a691c00 with size: 0.000183 MiB 00:05:18.029 element at address: 0x20001a691cc0 with size: 0.000183 MiB 00:05:18.029 element at address: 0x20001a691d80 with size: 0.000183 MiB 00:05:18.029 element at address: 0x20001a691e40 with size: 0.000183 MiB 00:05:18.029 element at address: 0x20001a691f00 with size: 0.000183 MiB 00:05:18.029 element at address: 0x20001a691fc0 with size: 0.000183 MiB 00:05:18.029 element at address: 0x20001a692080 with size: 0.000183 MiB 00:05:18.029 element at address: 0x20001a692140 with size: 0.000183 MiB 00:05:18.029 element at address: 0x20001a692200 with size: 0.000183 MiB 00:05:18.029 element at address: 0x20001a6922c0 with size: 0.000183 MiB 00:05:18.029 element at address: 0x20001a692380 with size: 0.000183 MiB 00:05:18.029 element at address: 0x20001a692440 with size: 0.000183 MiB 00:05:18.029 element at address: 0x20001a692500 with size: 0.000183 MiB 00:05:18.029 element at address: 0x20001a6925c0 with size: 0.000183 MiB 00:05:18.029 element at address: 0x20001a692680 with size: 0.000183 MiB 00:05:18.029 element at address: 0x20001a692740 with size: 0.000183 MiB 00:05:18.029 element at address: 0x20001a692800 with size: 0.000183 MiB 00:05:18.029 element at address: 0x20001a6928c0 with size: 0.000183 MiB 00:05:18.029 element at address: 0x20001a692980 with size: 0.000183 MiB 00:05:18.029 element at address: 0x20001a692a40 with size: 0.000183 MiB 00:05:18.029 element at address: 0x20001a692b00 with size: 0.000183 MiB 00:05:18.029 element at address: 0x20001a692bc0 with size: 0.000183 MiB 00:05:18.029 element at address: 0x20001a692c80 with size: 0.000183 MiB 00:05:18.029 element at address: 0x20001a692d40 with size: 0.000183 MiB 00:05:18.029 element at address: 0x20001a692e00 with size: 0.000183 MiB 00:05:18.029 element at address: 0x20001a692ec0 with size: 0.000183 MiB 00:05:18.029 element at address: 0x20001a692f80 with size: 0.000183 MiB 00:05:18.029 element at address: 0x20001a693040 with size: 0.000183 MiB 
00:05:18.029 element at address: 0x20001a693100 with size: 0.000183 MiB 00:05:18.029 element at address: 0x20001a6931c0 with size: 0.000183 MiB 00:05:18.029 element at address: 0x20001a693280 with size: 0.000183 MiB 00:05:18.029 element at address: 0x20001a693340 with size: 0.000183 MiB 00:05:18.029 element at address: 0x20001a693400 with size: 0.000183 MiB 00:05:18.029 element at address: 0x20001a6934c0 with size: 0.000183 MiB 00:05:18.029 element at address: 0x20001a693580 with size: 0.000183 MiB 00:05:18.029 element at address: 0x20001a693640 with size: 0.000183 MiB 00:05:18.029 element at address: 0x20001a693700 with size: 0.000183 MiB 00:05:18.029 element at address: 0x20001a6937c0 with size: 0.000183 MiB 00:05:18.029 element at address: 0x20001a693880 with size: 0.000183 MiB 00:05:18.029 element at address: 0x20001a693940 with size: 0.000183 MiB 00:05:18.029 element at address: 0x20001a693a00 with size: 0.000183 MiB 00:05:18.029 element at address: 0x20001a693ac0 with size: 0.000183 MiB 00:05:18.029 element at address: 0x20001a693b80 with size: 0.000183 MiB 00:05:18.029 element at address: 0x20001a693c40 with size: 0.000183 MiB 00:05:18.029 element at address: 0x20001a693d00 with size: 0.000183 MiB 00:05:18.029 element at address: 0x20001a693dc0 with size: 0.000183 MiB 00:05:18.029 element at address: 0x20001a693e80 with size: 0.000183 MiB 00:05:18.029 element at address: 0x20001a693f40 with size: 0.000183 MiB 00:05:18.029 element at address: 0x20001a694000 with size: 0.000183 MiB 00:05:18.029 element at address: 0x20001a6940c0 with size: 0.000183 MiB 00:05:18.029 element at address: 0x20001a694180 with size: 0.000183 MiB 00:05:18.029 element at address: 0x20001a694240 with size: 0.000183 MiB 00:05:18.029 element at address: 0x20001a694300 with size: 0.000183 MiB 00:05:18.029 element at address: 0x20001a6943c0 with size: 0.000183 MiB 00:05:18.029 element at address: 0x20001a694480 with size: 0.000183 MiB 00:05:18.029 element at address: 0x20001a694540 with size: 0.000183 MiB 00:05:18.029 element at address: 0x20001a694600 with size: 0.000183 MiB 00:05:18.029 element at address: 0x20001a6946c0 with size: 0.000183 MiB 00:05:18.029 element at address: 0x20001a694780 with size: 0.000183 MiB 00:05:18.029 element at address: 0x20001a694840 with size: 0.000183 MiB 00:05:18.029 element at address: 0x20001a694900 with size: 0.000183 MiB 00:05:18.029 element at address: 0x20001a6949c0 with size: 0.000183 MiB 00:05:18.029 element at address: 0x20001a694a80 with size: 0.000183 MiB 00:05:18.029 element at address: 0x20001a694b40 with size: 0.000183 MiB 00:05:18.029 element at address: 0x20001a694c00 with size: 0.000183 MiB 00:05:18.029 element at address: 0x20001a694cc0 with size: 0.000183 MiB 00:05:18.029 element at address: 0x20001a694d80 with size: 0.000183 MiB 00:05:18.029 element at address: 0x20001a694e40 with size: 0.000183 MiB 00:05:18.029 element at address: 0x20001a694f00 with size: 0.000183 MiB 00:05:18.029 element at address: 0x20001a694fc0 with size: 0.000183 MiB 00:05:18.029 element at address: 0x20001a695080 with size: 0.000183 MiB 00:05:18.029 element at address: 0x20001a695140 with size: 0.000183 MiB 00:05:18.029 element at address: 0x20001a695200 with size: 0.000183 MiB 00:05:18.029 element at address: 0x20001a6952c0 with size: 0.000183 MiB 00:05:18.029 element at address: 0x20001a695380 with size: 0.000183 MiB 00:05:18.029 element at address: 0x20001a695440 with size: 0.000183 MiB 00:05:18.029 element at address: 0x200027a65500 with size: 0.000183 MiB 00:05:18.029 element at 
address: 0x200027a655c0 with size: 0.000183 MiB 00:05:18.029 element at address: 0x200027a6c1c0 with size: 0.000183 MiB 00:05:18.029 element at address: 0x200027a6c3c0 with size: 0.000183 MiB 00:05:18.029 element at address: 0x200027a6c480 with size: 0.000183 MiB 00:05:18.029 element at address: 0x200027a6c540 with size: 0.000183 MiB 00:05:18.029 element at address: 0x200027a6c600 with size: 0.000183 MiB 00:05:18.029 element at address: 0x200027a6c6c0 with size: 0.000183 MiB 00:05:18.029 element at address: 0x200027a6c780 with size: 0.000183 MiB 00:05:18.029 element at address: 0x200027a6c840 with size: 0.000183 MiB 00:05:18.029 element at address: 0x200027a6c900 with size: 0.000183 MiB 00:05:18.029 element at address: 0x200027a6c9c0 with size: 0.000183 MiB 00:05:18.029 element at address: 0x200027a6ca80 with size: 0.000183 MiB 00:05:18.029 element at address: 0x200027a6cb40 with size: 0.000183 MiB 00:05:18.029 element at address: 0x200027a6cc00 with size: 0.000183 MiB 00:05:18.029 element at address: 0x200027a6ccc0 with size: 0.000183 MiB 00:05:18.029 element at address: 0x200027a6cd80 with size: 0.000183 MiB 00:05:18.029 element at address: 0x200027a6ce40 with size: 0.000183 MiB 00:05:18.029 element at address: 0x200027a6cf00 with size: 0.000183 MiB 00:05:18.029 element at address: 0x200027a6cfc0 with size: 0.000183 MiB 00:05:18.029 element at address: 0x200027a6d080 with size: 0.000183 MiB 00:05:18.029 element at address: 0x200027a6d140 with size: 0.000183 MiB 00:05:18.029 element at address: 0x200027a6d200 with size: 0.000183 MiB 00:05:18.029 element at address: 0x200027a6d2c0 with size: 0.000183 MiB 00:05:18.029 element at address: 0x200027a6d380 with size: 0.000183 MiB 00:05:18.029 element at address: 0x200027a6d440 with size: 0.000183 MiB 00:05:18.029 element at address: 0x200027a6d500 with size: 0.000183 MiB 00:05:18.029 element at address: 0x200027a6d5c0 with size: 0.000183 MiB 00:05:18.029 element at address: 0x200027a6d680 with size: 0.000183 MiB 00:05:18.029 element at address: 0x200027a6d740 with size: 0.000183 MiB 00:05:18.029 element at address: 0x200027a6d800 with size: 0.000183 MiB 00:05:18.029 element at address: 0x200027a6d8c0 with size: 0.000183 MiB 00:05:18.029 element at address: 0x200027a6d980 with size: 0.000183 MiB 00:05:18.030 element at address: 0x200027a6da40 with size: 0.000183 MiB 00:05:18.030 element at address: 0x200027a6db00 with size: 0.000183 MiB 00:05:18.030 element at address: 0x200027a6dbc0 with size: 0.000183 MiB 00:05:18.030 element at address: 0x200027a6dc80 with size: 0.000183 MiB 00:05:18.030 element at address: 0x200027a6dd40 with size: 0.000183 MiB 00:05:18.030 element at address: 0x200027a6de00 with size: 0.000183 MiB 00:05:18.030 element at address: 0x200027a6dec0 with size: 0.000183 MiB 00:05:18.030 element at address: 0x200027a6df80 with size: 0.000183 MiB 00:05:18.030 element at address: 0x200027a6e040 with size: 0.000183 MiB 00:05:18.030 element at address: 0x200027a6e100 with size: 0.000183 MiB 00:05:18.030 element at address: 0x200027a6e1c0 with size: 0.000183 MiB 00:05:18.030 element at address: 0x200027a6e280 with size: 0.000183 MiB 00:05:18.030 element at address: 0x200027a6e340 with size: 0.000183 MiB 00:05:18.030 element at address: 0x200027a6e400 with size: 0.000183 MiB 00:05:18.030 element at address: 0x200027a6e4c0 with size: 0.000183 MiB 00:05:18.030 element at address: 0x200027a6e580 with size: 0.000183 MiB 00:05:18.030 element at address: 0x200027a6e640 with size: 0.000183 MiB 00:05:18.030 element at address: 0x200027a6e700 
with size: 0.000183 MiB 00:05:18.030 element at address: 0x200027a6e7c0 with size: 0.000183 MiB 00:05:18.030 element at address: 0x200027a6e880 with size: 0.000183 MiB 00:05:18.030 element at address: 0x200027a6e940 with size: 0.000183 MiB 00:05:18.030 element at address: 0x200027a6ea00 with size: 0.000183 MiB 00:05:18.030 element at address: 0x200027a6eac0 with size: 0.000183 MiB 00:05:18.030 element at address: 0x200027a6eb80 with size: 0.000183 MiB 00:05:18.030 element at address: 0x200027a6ec40 with size: 0.000183 MiB 00:05:18.030 element at address: 0x200027a6ed00 with size: 0.000183 MiB 00:05:18.030 element at address: 0x200027a6edc0 with size: 0.000183 MiB 00:05:18.030 element at address: 0x200027a6ee80 with size: 0.000183 MiB 00:05:18.030 element at address: 0x200027a6ef40 with size: 0.000183 MiB 00:05:18.030 element at address: 0x200027a6f000 with size: 0.000183 MiB 00:05:18.030 element at address: 0x200027a6f0c0 with size: 0.000183 MiB 00:05:18.030 element at address: 0x200027a6f180 with size: 0.000183 MiB 00:05:18.030 element at address: 0x200027a6f240 with size: 0.000183 MiB 00:05:18.030 element at address: 0x200027a6f300 with size: 0.000183 MiB 00:05:18.030 element at address: 0x200027a6f3c0 with size: 0.000183 MiB 00:05:18.030 element at address: 0x200027a6f480 with size: 0.000183 MiB 00:05:18.030 element at address: 0x200027a6f540 with size: 0.000183 MiB 00:05:18.030 element at address: 0x200027a6f600 with size: 0.000183 MiB 00:05:18.030 element at address: 0x200027a6f6c0 with size: 0.000183 MiB 00:05:18.030 element at address: 0x200027a6f780 with size: 0.000183 MiB 00:05:18.030 element at address: 0x200027a6f840 with size: 0.000183 MiB 00:05:18.030 element at address: 0x200027a6f900 with size: 0.000183 MiB 00:05:18.030 element at address: 0x200027a6f9c0 with size: 0.000183 MiB 00:05:18.030 element at address: 0x200027a6fa80 with size: 0.000183 MiB 00:05:18.030 element at address: 0x200027a6fb40 with size: 0.000183 MiB 00:05:18.030 element at address: 0x200027a6fc00 with size: 0.000183 MiB 00:05:18.030 element at address: 0x200027a6fcc0 with size: 0.000183 MiB 00:05:18.030 element at address: 0x200027a6fd80 with size: 0.000183 MiB 00:05:18.030 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:05:18.030 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:05:18.030 list of memzone associated elements. 
size: 599.918884 MiB 00:05:18.030 element at address: 0x20001a695500 with size: 211.416748 MiB 00:05:18.030 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:18.030 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:05:18.030 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:18.030 element at address: 0x200012df4780 with size: 92.045044 MiB 00:05:18.030 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_57806_0 00:05:18.030 element at address: 0x200000dff380 with size: 48.003052 MiB 00:05:18.030 associated memzone info: size: 48.002930 MiB name: MP_msgpool_57806_0 00:05:18.030 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:05:18.030 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_57806_0 00:05:18.030 element at address: 0x2000191be940 with size: 20.255554 MiB 00:05:18.030 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:18.030 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:05:18.030 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:18.030 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:05:18.030 associated memzone info: size: 3.000122 MiB name: MP_evtpool_57806_0 00:05:18.030 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:05:18.030 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_57806 00:05:18.030 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:18.030 associated memzone info: size: 1.007996 MiB name: MP_evtpool_57806 00:05:18.030 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:05:18.030 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:18.030 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:05:18.030 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:18.030 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:05:18.030 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:18.030 element at address: 0x200003efba40 with size: 1.008118 MiB 00:05:18.030 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:18.030 element at address: 0x200000cff180 with size: 1.000488 MiB 00:05:18.030 associated memzone info: size: 1.000366 MiB name: RG_ring_0_57806 00:05:18.030 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:05:18.030 associated memzone info: size: 1.000366 MiB name: RG_ring_1_57806 00:05:18.030 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:05:18.030 associated memzone info: size: 1.000366 MiB name: RG_ring_4_57806 00:05:18.030 element at address: 0x2000318fe940 with size: 1.000488 MiB 00:05:18.030 associated memzone info: size: 1.000366 MiB name: RG_ring_5_57806 00:05:18.030 element at address: 0x20000087f740 with size: 0.500488 MiB 00:05:18.030 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_57806 00:05:18.030 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:05:18.030 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_57806 00:05:18.030 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:05:18.030 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:18.030 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:05:18.030 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:18.030 element at address: 0x20001907c540 with size: 0.250488 MiB 00:05:18.030 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:05:18.030 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:05:18.030 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_57806 00:05:18.030 element at address: 0x20000085e640 with size: 0.125488 MiB 00:05:18.030 associated memzone info: size: 0.125366 MiB name: RG_ring_2_57806 00:05:18.030 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:05:18.030 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:18.030 element at address: 0x200027a65680 with size: 0.023743 MiB 00:05:18.030 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:18.030 element at address: 0x20000085a380 with size: 0.016113 MiB 00:05:18.030 associated memzone info: size: 0.015991 MiB name: RG_ring_3_57806 00:05:18.030 element at address: 0x200027a6b7c0 with size: 0.002441 MiB 00:05:18.030 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:18.030 element at address: 0x2000004ffb80 with size: 0.000305 MiB 00:05:18.030 associated memzone info: size: 0.000183 MiB name: MP_msgpool_57806 00:05:18.030 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:05:18.030 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_57806 00:05:18.030 element at address: 0x20000085a180 with size: 0.000305 MiB 00:05:18.030 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_57806 00:05:18.030 element at address: 0x200027a6c280 with size: 0.000305 MiB 00:05:18.030 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:18.030 10:49:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:18.030 10:49:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 57806 00:05:18.030 10:49:04 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 57806 ']' 00:05:18.030 10:49:04 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 57806 00:05:18.030 10:49:04 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:05:18.030 10:49:04 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:18.030 10:49:04 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57806 00:05:18.030 killing process with pid 57806 00:05:18.030 10:49:04 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:18.030 10:49:04 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:18.030 10:49:04 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57806' 00:05:18.030 10:49:04 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 57806 00:05:18.030 10:49:04 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 57806 00:05:18.598 ************************************ 00:05:18.598 END TEST dpdk_mem_utility 00:05:18.598 ************************************ 00:05:18.598 00:05:18.598 real 0m1.292s 00:05:18.598 user 0m1.249s 00:05:18.598 sys 0m0.438s 00:05:18.598 10:49:05 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:18.598 10:49:05 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:18.598 10:49:05 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:18.598 10:49:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:18.598 10:49:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:18.598 10:49:05 -- common/autotest_common.sh@10 -- # set +x 
00:05:18.598 ************************************ 00:05:18.598 START TEST event 00:05:18.598 ************************************ 00:05:18.598 10:49:05 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:18.598 * Looking for test storage... 00:05:18.598 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:18.598 10:49:05 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:18.598 10:49:05 event -- common/autotest_common.sh@1693 -- # lcov --version 00:05:18.598 10:49:05 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:18.598 10:49:05 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:18.598 10:49:05 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:18.598 10:49:05 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:18.598 10:49:05 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:18.599 10:49:05 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:18.599 10:49:05 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:18.599 10:49:05 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:18.599 10:49:05 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:18.599 10:49:05 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:18.599 10:49:05 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:18.599 10:49:05 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:18.599 10:49:05 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:18.599 10:49:05 event -- scripts/common.sh@344 -- # case "$op" in 00:05:18.599 10:49:05 event -- scripts/common.sh@345 -- # : 1 00:05:18.599 10:49:05 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:18.599 10:49:05 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:18.599 10:49:05 event -- scripts/common.sh@365 -- # decimal 1 00:05:18.599 10:49:05 event -- scripts/common.sh@353 -- # local d=1 00:05:18.599 10:49:05 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:18.599 10:49:05 event -- scripts/common.sh@355 -- # echo 1 00:05:18.599 10:49:05 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:18.599 10:49:05 event -- scripts/common.sh@366 -- # decimal 2 00:05:18.599 10:49:05 event -- scripts/common.sh@353 -- # local d=2 00:05:18.599 10:49:05 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:18.599 10:49:05 event -- scripts/common.sh@355 -- # echo 2 00:05:18.599 10:49:05 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:18.599 10:49:05 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:18.599 10:49:05 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:18.599 10:49:05 event -- scripts/common.sh@368 -- # return 0 00:05:18.599 10:49:05 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:18.599 10:49:05 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:18.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.599 --rc genhtml_branch_coverage=1 00:05:18.599 --rc genhtml_function_coverage=1 00:05:18.599 --rc genhtml_legend=1 00:05:18.599 --rc geninfo_all_blocks=1 00:05:18.599 --rc geninfo_unexecuted_blocks=1 00:05:18.599 00:05:18.599 ' 00:05:18.599 10:49:05 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:18.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.599 --rc genhtml_branch_coverage=1 00:05:18.599 --rc genhtml_function_coverage=1 00:05:18.599 --rc genhtml_legend=1 00:05:18.599 --rc 
geninfo_all_blocks=1 00:05:18.599 --rc geninfo_unexecuted_blocks=1 00:05:18.599 00:05:18.599 ' 00:05:18.599 10:49:05 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:18.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.599 --rc genhtml_branch_coverage=1 00:05:18.599 --rc genhtml_function_coverage=1 00:05:18.599 --rc genhtml_legend=1 00:05:18.599 --rc geninfo_all_blocks=1 00:05:18.599 --rc geninfo_unexecuted_blocks=1 00:05:18.599 00:05:18.599 ' 00:05:18.599 10:49:05 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:18.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.599 --rc genhtml_branch_coverage=1 00:05:18.599 --rc genhtml_function_coverage=1 00:05:18.599 --rc genhtml_legend=1 00:05:18.599 --rc geninfo_all_blocks=1 00:05:18.599 --rc geninfo_unexecuted_blocks=1 00:05:18.599 00:05:18.599 ' 00:05:18.599 10:49:05 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:18.599 10:49:05 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:18.599 10:49:05 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:18.599 10:49:05 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:18.599 10:49:05 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:18.599 10:49:05 event -- common/autotest_common.sh@10 -- # set +x 00:05:18.599 ************************************ 00:05:18.599 START TEST event_perf 00:05:18.599 ************************************ 00:05:18.599 10:49:05 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:18.599 Running I/O for 1 seconds...[2024-11-15 10:49:05.445028] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:05:18.599 [2024-11-15 10:49:05.445248] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57883 ] 00:05:18.858 [2024-11-15 10:49:05.585981] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:18.858 [2024-11-15 10:49:05.628871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:18.858 [2024-11-15 10:49:05.629023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:18.858 Running I/O for 1 seconds...[2024-11-15 10:49:05.629134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.858 [2024-11-15 10:49:05.629134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:20.237 00:05:20.237 lcore 0: 204946 00:05:20.237 lcore 1: 204943 00:05:20.237 lcore 2: 204943 00:05:20.237 lcore 3: 204944 00:05:20.237 done. 
00:05:20.237 00:05:20.237 real 0m1.240s 00:05:20.237 user 0m4.081s 00:05:20.237 sys 0m0.039s 00:05:20.237 10:49:06 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:20.237 10:49:06 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:20.237 ************************************ 00:05:20.237 END TEST event_perf 00:05:20.237 ************************************ 00:05:20.237 10:49:06 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:20.237 10:49:06 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:20.237 10:49:06 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:20.237 10:49:06 event -- common/autotest_common.sh@10 -- # set +x 00:05:20.237 ************************************ 00:05:20.237 START TEST event_reactor 00:05:20.237 ************************************ 00:05:20.237 10:49:06 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:20.237 [2024-11-15 10:49:06.742636] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:05:20.237 [2024-11-15 10:49:06.742867] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57922 ] 00:05:20.237 [2024-11-15 10:49:06.886744] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.237 [2024-11-15 10:49:06.924352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.175 test_start 00:05:21.175 oneshot 00:05:21.175 tick 100 00:05:21.175 tick 100 00:05:21.175 tick 250 00:05:21.175 tick 100 00:05:21.175 tick 100 00:05:21.175 tick 250 00:05:21.175 tick 100 00:05:21.175 tick 500 00:05:21.175 tick 100 00:05:21.175 tick 100 00:05:21.175 tick 250 00:05:21.175 tick 100 00:05:21.175 tick 100 00:05:21.175 test_end 00:05:21.175 ************************************ 00:05:21.175 END TEST event_reactor 00:05:21.175 ************************************ 00:05:21.175 00:05:21.175 real 0m1.241s 00:05:21.175 user 0m1.102s 00:05:21.175 sys 0m0.033s 00:05:21.175 10:49:07 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:21.175 10:49:07 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:21.175 10:49:08 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:21.175 10:49:08 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:21.175 10:49:08 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:21.175 10:49:08 event -- common/autotest_common.sh@10 -- # set +x 00:05:21.175 ************************************ 00:05:21.175 START TEST event_reactor_perf 00:05:21.175 ************************************ 00:05:21.175 10:49:08 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:21.175 [2024-11-15 10:49:08.029905] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
00:05:21.175 [2024-11-15 10:49:08.030152] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57952 ] 00:05:21.434 [2024-11-15 10:49:08.169498] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.434 [2024-11-15 10:49:08.208740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.812 test_start 00:05:22.812 test_end 00:05:22.812 Performance: 465921 events per second 00:05:22.812 00:05:22.812 real 0m1.235s 00:05:22.812 user 0m1.090s 00:05:22.812 sys 0m0.041s 00:05:22.812 10:49:09 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:22.812 10:49:09 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:22.812 ************************************ 00:05:22.812 END TEST event_reactor_perf 00:05:22.812 ************************************ 00:05:22.812 10:49:09 event -- event/event.sh@49 -- # uname -s 00:05:22.812 10:49:09 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:22.812 10:49:09 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:22.812 10:49:09 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:22.812 10:49:09 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:22.812 10:49:09 event -- common/autotest_common.sh@10 -- # set +x 00:05:22.812 ************************************ 00:05:22.812 START TEST event_scheduler 00:05:22.813 ************************************ 00:05:22.813 10:49:09 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:22.813 * Looking for test storage... 
00:05:22.813 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:22.813 10:49:09 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:22.813 10:49:09 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:05:22.813 10:49:09 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:22.813 10:49:09 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:22.813 10:49:09 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:22.813 10:49:09 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:22.813 10:49:09 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:22.813 10:49:09 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:22.813 10:49:09 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:22.813 10:49:09 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:22.813 10:49:09 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:22.813 10:49:09 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:22.813 10:49:09 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:22.813 10:49:09 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:22.813 10:49:09 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:22.813 10:49:09 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:22.813 10:49:09 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:22.813 10:49:09 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:22.813 10:49:09 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:22.813 10:49:09 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:22.813 10:49:09 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:22.813 10:49:09 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:22.813 10:49:09 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:22.813 10:49:09 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:22.813 10:49:09 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:22.813 10:49:09 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:22.813 10:49:09 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:22.813 10:49:09 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:22.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:22.813 10:49:09 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:22.813 10:49:09 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:22.813 10:49:09 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:22.813 10:49:09 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:22.813 10:49:09 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:22.813 10:49:09 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:22.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.813 --rc genhtml_branch_coverage=1 00:05:22.813 --rc genhtml_function_coverage=1 00:05:22.813 --rc genhtml_legend=1 00:05:22.813 --rc geninfo_all_blocks=1 00:05:22.813 --rc geninfo_unexecuted_blocks=1 00:05:22.813 00:05:22.813 ' 00:05:22.813 10:49:09 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:22.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.813 --rc genhtml_branch_coverage=1 00:05:22.813 --rc genhtml_function_coverage=1 00:05:22.813 --rc genhtml_legend=1 00:05:22.813 --rc geninfo_all_blocks=1 00:05:22.813 --rc geninfo_unexecuted_blocks=1 00:05:22.813 00:05:22.813 ' 00:05:22.813 10:49:09 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:22.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.813 --rc genhtml_branch_coverage=1 00:05:22.813 --rc genhtml_function_coverage=1 00:05:22.813 --rc genhtml_legend=1 00:05:22.813 --rc geninfo_all_blocks=1 00:05:22.813 --rc geninfo_unexecuted_blocks=1 00:05:22.813 00:05:22.813 ' 00:05:22.813 10:49:09 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:22.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.813 --rc genhtml_branch_coverage=1 00:05:22.813 --rc genhtml_function_coverage=1 00:05:22.813 --rc genhtml_legend=1 00:05:22.813 --rc geninfo_all_blocks=1 00:05:22.813 --rc geninfo_unexecuted_blocks=1 00:05:22.813 00:05:22.813 ' 00:05:22.813 10:49:09 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:22.813 10:49:09 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58021 00:05:22.813 10:49:09 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:22.813 10:49:09 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58021 00:05:22.813 10:49:09 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58021 ']' 00:05:22.813 10:49:09 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:22.813 10:49:09 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:22.813 10:49:09 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:22.813 10:49:09 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:22.813 10:49:09 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:22.813 10:49:09 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:22.813 [2024-11-15 10:49:09.523652] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
00:05:22.813 [2024-11-15 10:49:09.523953] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58021 ] 00:05:23.073 [2024-11-15 10:49:09.675338] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:23.073 [2024-11-15 10:49:09.732450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.073 [2024-11-15 10:49:09.732576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:23.073 [2024-11-15 10:49:09.732701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:23.073 [2024-11-15 10:49:09.732708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:23.073 10:49:09 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:23.073 10:49:09 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:23.073 10:49:09 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:23.073 10:49:09 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:23.073 10:49:09 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:23.073 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:23.073 POWER: Cannot set governor of lcore 0 to userspace 00:05:23.073 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:23.073 POWER: Cannot set governor of lcore 0 to performance 00:05:23.073 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:23.073 POWER: Cannot set governor of lcore 0 to userspace 00:05:23.073 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:23.073 POWER: Cannot set governor of lcore 0 to userspace 00:05:23.073 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:23.073 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:23.073 POWER: Unable to set Power Management Environment for lcore 0 00:05:23.073 [2024-11-15 10:49:09.788804] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:05:23.073 [2024-11-15 10:49:09.788962] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:05:23.073 [2024-11-15 10:49:09.789107] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:23.073 [2024-11-15 10:49:09.789244] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:23.073 [2024-11-15 10:49:09.789381] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:23.073 [2024-11-15 10:49:09.789545] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:23.073 10:49:09 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:23.073 10:49:09 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:23.073 10:49:09 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:23.073 10:49:09 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:23.073 [2024-11-15 10:49:09.851311] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:23.073 [2024-11-15 10:49:09.887208] 
scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:23.073 10:49:09 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:23.073 10:49:09 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:23.073 10:49:09 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:23.073 10:49:09 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:23.073 10:49:09 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:23.073 ************************************ 00:05:23.073 START TEST scheduler_create_thread 00:05:23.073 ************************************ 00:05:23.073 10:49:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:23.073 10:49:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:23.073 10:49:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:23.073 10:49:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:23.073 2 00:05:23.073 10:49:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:23.073 10:49:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:23.073 10:49:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:23.073 10:49:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:23.073 3 00:05:23.073 10:49:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:23.073 10:49:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:23.073 10:49:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:23.073 10:49:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:23.332 4 00:05:23.332 10:49:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:23.332 10:49:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:23.332 10:49:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:23.332 10:49:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:23.332 5 00:05:23.332 10:49:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:23.332 10:49:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:23.332 10:49:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:23.332 10:49:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:23.332 6 00:05:23.332 
10:49:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:23.332 10:49:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:23.332 10:49:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:23.332 10:49:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:23.332 7 00:05:23.332 10:49:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:23.332 10:49:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:23.332 10:49:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:23.332 10:49:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:23.332 8 00:05:23.332 10:49:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:23.332 10:49:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:23.332 10:49:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:23.332 10:49:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:23.332 9 00:05:23.332 10:49:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:23.332 10:49:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:23.332 10:49:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:23.332 10:49:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:23.332 10 00:05:23.332 10:49:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:23.332 10:49:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:23.332 10:49:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:23.332 10:49:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:23.332 10:49:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:23.332 10:49:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:23.332 10:49:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:23.332 10:49:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:23.333 10:49:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:23.333 10:49:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:23.333 10:49:10 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:23.333 10:49:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:23.333 10:49:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:24.711 10:49:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:24.711 10:49:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:24.711 10:49:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:24.711 10:49:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:24.711 10:49:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:25.701 10:49:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.701 00:05:25.701 real 0m2.614s 00:05:25.701 user 0m0.012s 00:05:25.701 sys 0m0.006s 00:05:25.701 10:49:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:25.701 10:49:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:25.701 ************************************ 00:05:25.701 END TEST scheduler_create_thread 00:05:25.701 ************************************ 00:05:25.998 10:49:12 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:25.998 10:49:12 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58021 00:05:25.998 10:49:12 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58021 ']' 00:05:25.998 10:49:12 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58021 00:05:25.998 10:49:12 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:25.998 10:49:12 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:25.998 10:49:12 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58021 00:05:25.998 killing process with pid 58021 00:05:25.998 10:49:12 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:25.998 10:49:12 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:25.998 10:49:12 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58021' 00:05:25.998 10:49:12 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58021 00:05:25.998 10:49:12 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58021 00:05:26.257 [2024-11-15 10:49:12.994826] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
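The scheduler_create_thread test above drives the scheduler purely over JSON-RPC against the test app started earlier with --wait-for-rpc: it selects the dynamic scheduler, finishes framework init, creates pinned active and idle threads with scheduler_thread_create, adjusts one thread's activity with scheduler_thread_set_active, and removes another with scheduler_thread_delete. The sketch below issues the same calls by hand with SPDK's rpc.py on the default /var/tmp/spdk.sock; it assumes the scheduler test app from the trace is already running and that scheduler_plugin.py is importable (the PYTHONPATH line is an assumption about where that plugin lives), and the captured thread IDs will differ from the 11 and 12 seen in the log.

  #!/usr/bin/env bash
  set -e
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Assumption for this sketch: scheduler_plugin.py sits in the scheduler test directory.
  export PYTHONPATH=/home/vagrant/spdk_repo/spdk/test/event/scheduler${PYTHONPATH:+:$PYTHONPATH}

  # Pick the dynamic scheduler, then let initialization finish.
  $rpc framework_set_scheduler dynamic
  $rpc framework_start_init

  # One busy thread pinned to core 0, one idle thread pinned to core 1;
  # each call prints the new thread id, captured the same way the harness does.
  active_id=$($rpc --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100)
  idle_id=$($rpc --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0)

  # Adjust the busy thread to 50% activity, then delete the idle one.
  $rpc --plugin scheduler_plugin scheduler_thread_set_active "$active_id" 50
  $rpc --plugin scheduler_plugin scheduler_thread_delete "$idle_id"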
00:05:26.516 00:05:26.516 real 0m3.883s 00:05:26.516 user 0m5.702s 00:05:26.516 sys 0m0.357s 00:05:26.516 ************************************ 00:05:26.516 END TEST event_scheduler 00:05:26.516 ************************************ 00:05:26.516 10:49:13 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:26.516 10:49:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:26.516 10:49:13 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:26.516 10:49:13 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:26.516 10:49:13 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:26.516 10:49:13 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:26.516 10:49:13 event -- common/autotest_common.sh@10 -- # set +x 00:05:26.516 ************************************ 00:05:26.516 START TEST app_repeat 00:05:26.516 ************************************ 00:05:26.516 10:49:13 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:26.516 10:49:13 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:26.516 10:49:13 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:26.516 10:49:13 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:26.516 10:49:13 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:26.516 10:49:13 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:26.516 10:49:13 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:26.516 10:49:13 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:26.516 Process app_repeat pid: 58108 00:05:26.516 spdk_app_start Round 0 00:05:26.516 10:49:13 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58108 00:05:26.516 10:49:13 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:26.516 10:49:13 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:26.516 10:49:13 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58108' 00:05:26.516 10:49:13 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:26.516 10:49:13 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:26.516 10:49:13 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58108 /var/tmp/spdk-nbd.sock 00:05:26.516 10:49:13 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58108 ']' 00:05:26.516 10:49:13 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:26.516 10:49:13 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:26.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:26.516 10:49:13 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:26.516 10:49:13 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:26.516 10:49:13 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:26.516 [2024-11-15 10:49:13.275561] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
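Here the event_scheduler suite has passed and app_repeat takes over: the harness confirms the nbd module is available (modprobe -n nbd), starts the app_repeat binary with its JSON-RPC server on /var/tmp/spdk-nbd.sock, and then sits in waitforlisten until that socket answers. A minimal version of that wait loop is sketched below; it polls with rpc.py's rpc_get_methods, the retry count and sleep interval are arbitrary values chosen for the example, and it assumes the app has already been launched with -r pointing at the same socket.

  #!/usr/bin/env bash
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-nbd.sock

  wait_for_rpc_socket() {
      # Retry/sleep values are arbitrary for this sketch; the real helper uses its own limits.
      local retries=100
      echo "Waiting for process to start up and listen on UNIX domain socket $sock..."
      while (( retries-- > 0 )); do
          if "$rpc" -t 1 -s "$sock" rpc_get_methods &> /dev/null; then
              return 0
          fi
          sleep 0.5
      done
      echo "Timed out waiting for $sock" >&2
      return 1
  }

  wait_for_rpc_socket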
00:05:26.516 [2024-11-15 10:49:13.275837] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58108 ] 00:05:26.776 [2024-11-15 10:49:13.410827] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:26.776 [2024-11-15 10:49:13.474800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:26.776 [2024-11-15 10:49:13.474817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.776 [2024-11-15 10:49:13.543297] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:26.776 10:49:13 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:26.776 10:49:13 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:26.776 10:49:13 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:27.036 Malloc0 00:05:27.036 10:49:13 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:27.604 Malloc1 00:05:27.604 10:49:14 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:27.604 10:49:14 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:27.604 10:49:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:27.604 10:49:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:27.604 10:49:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:27.604 10:49:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:27.604 10:49:14 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:27.604 10:49:14 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:27.604 10:49:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:27.604 10:49:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:27.604 10:49:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:27.604 10:49:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:27.604 10:49:14 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:27.604 10:49:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:27.604 10:49:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:27.604 10:49:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:27.604 /dev/nbd0 00:05:27.604 10:49:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:27.604 10:49:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:27.604 10:49:14 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:27.604 10:49:14 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:27.604 10:49:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:27.604 10:49:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:27.604 10:49:14 
event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:27.604 10:49:14 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:27.604 10:49:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:27.604 10:49:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:27.604 10:49:14 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:27.604 1+0 records in 00:05:27.604 1+0 records out 00:05:27.604 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000218426 s, 18.8 MB/s 00:05:27.604 10:49:14 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:27.604 10:49:14 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:27.604 10:49:14 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:27.604 10:49:14 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:27.604 10:49:14 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:27.604 10:49:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:27.604 10:49:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:27.604 10:49:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:28.172 /dev/nbd1 00:05:28.172 10:49:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:28.172 10:49:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:28.172 10:49:14 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:28.172 10:49:14 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:28.172 10:49:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:28.172 10:49:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:28.172 10:49:14 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:28.172 10:49:14 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:28.172 10:49:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:28.172 10:49:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:28.172 10:49:14 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:28.172 1+0 records in 00:05:28.172 1+0 records out 00:05:28.172 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000336692 s, 12.2 MB/s 00:05:28.172 10:49:14 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:28.172 10:49:14 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:28.172 10:49:14 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:28.172 10:49:14 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:28.172 10:49:14 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:28.172 10:49:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:28.172 10:49:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:28.172 10:49:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count 
/var/tmp/spdk-nbd.sock 00:05:28.172 10:49:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:28.172 10:49:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:28.431 10:49:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:28.431 { 00:05:28.431 "nbd_device": "/dev/nbd0", 00:05:28.431 "bdev_name": "Malloc0" 00:05:28.431 }, 00:05:28.431 { 00:05:28.431 "nbd_device": "/dev/nbd1", 00:05:28.431 "bdev_name": "Malloc1" 00:05:28.431 } 00:05:28.431 ]' 00:05:28.431 10:49:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:28.431 { 00:05:28.431 "nbd_device": "/dev/nbd0", 00:05:28.431 "bdev_name": "Malloc0" 00:05:28.431 }, 00:05:28.431 { 00:05:28.431 "nbd_device": "/dev/nbd1", 00:05:28.431 "bdev_name": "Malloc1" 00:05:28.431 } 00:05:28.431 ]' 00:05:28.431 10:49:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:28.431 10:49:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:28.431 /dev/nbd1' 00:05:28.431 10:49:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:28.431 /dev/nbd1' 00:05:28.431 10:49:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:28.431 10:49:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:28.431 10:49:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:28.431 10:49:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:28.431 10:49:15 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:28.431 10:49:15 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:28.431 10:49:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:28.431 10:49:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:28.431 10:49:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:28.431 10:49:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:28.431 10:49:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:28.431 10:49:15 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:28.431 256+0 records in 00:05:28.431 256+0 records out 00:05:28.431 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00692215 s, 151 MB/s 00:05:28.431 10:49:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:28.431 10:49:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:28.431 256+0 records in 00:05:28.431 256+0 records out 00:05:28.431 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0227007 s, 46.2 MB/s 00:05:28.431 10:49:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:28.431 10:49:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:28.431 256+0 records in 00:05:28.431 256+0 records out 00:05:28.431 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0236282 s, 44.4 MB/s 00:05:28.431 10:49:15 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:28.431 10:49:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:28.431 10:49:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:28.431 10:49:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:28.431 10:49:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:28.431 10:49:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:28.431 10:49:15 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:28.431 10:49:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:28.432 10:49:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:28.432 10:49:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:28.432 10:49:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:28.432 10:49:15 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:28.432 10:49:15 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:28.432 10:49:15 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:28.432 10:49:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:28.432 10:49:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:28.432 10:49:15 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:28.432 10:49:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:28.432 10:49:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:28.691 10:49:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:28.691 10:49:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:28.691 10:49:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:28.691 10:49:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:28.691 10:49:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:28.691 10:49:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:28.691 10:49:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:28.691 10:49:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:28.691 10:49:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:28.691 10:49:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:28.950 10:49:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:28.950 10:49:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:28.950 10:49:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:28.950 10:49:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:28.950 10:49:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:28.950 10:49:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:28.950 10:49:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:28.950 10:49:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:28.950 10:49:15 
event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:28.950 10:49:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:28.950 10:49:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:29.209 10:49:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:29.209 10:49:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:29.209 10:49:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:29.209 10:49:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:29.209 10:49:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:29.209 10:49:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:29.209 10:49:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:29.209 10:49:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:29.209 10:49:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:29.209 10:49:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:29.209 10:49:16 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:29.209 10:49:16 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:29.209 10:49:16 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:29.467 10:49:16 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:29.726 [2024-11-15 10:49:16.551341] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:29.985 [2024-11-15 10:49:16.598987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:29.985 [2024-11-15 10:49:16.598999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.985 [2024-11-15 10:49:16.674364] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:29.985 [2024-11-15 10:49:16.674499] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:29.985 [2024-11-15 10:49:16.674514] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:32.519 10:49:19 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:32.519 spdk_app_start Round 1 00:05:32.519 10:49:19 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:32.519 10:49:19 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58108 /var/tmp/spdk-nbd.sock 00:05:32.519 10:49:19 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58108 ']' 00:05:32.519 10:49:19 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:32.519 10:49:19 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:32.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:32.519 10:49:19 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
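One full app_repeat round has just completed above, and each round repeats the same data path: bdev_malloc_create makes a 64 MB malloc bdev with 4096-byte blocks, nbd_start_disk exposes it as a kernel /dev/nbdX device, a 1 MiB random pattern is pushed through it with dd oflag=direct, cmp -b -n 1M checks that it reads back intact, and nbd_stop_disk tears it down before the app is killed. The sketch below condenses that cycle for a single bdev; it assumes an SPDK app is already listening on /var/tmp/spdk-nbd.sock, that the nbd kernel module is loaded and /dev/nbd0 is unused, and the temporary file path is an arbitrary choice for the example.

  #!/usr/bin/env bash
  set -e
  rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock "$@"; }
  tmp=$(mktemp /tmp/nbdrandtest.XXXXXX)   # arbitrary scratch file for this sketch

  # 64 MB malloc bdev with 4096-byte blocks; the RPC prints the name of the bdev it created.
  bdev=$(rpc bdev_malloc_create 64 4096)
  rpc nbd_start_disk "$bdev" /dev/nbd0

  # Wait until the kernel has registered the device.
  for _ in $(seq 1 20); do
      grep -q -w nbd0 /proc/partitions && break
      sleep 0.1
  done

  # Write a 1 MiB random pattern through the nbd device, then read it back and compare.
  dd if=/dev/urandom of="$tmp" bs=4096 count=256
  dd if="$tmp" of=/dev/nbd0 bs=4096 count=256 oflag=direct
  cmp -b -n 1M "$tmp" /dev/nbd0

  # Tear down.
  rpc nbd_stop_disk /dev/nbd0
  rm -f "$tmp"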
00:05:32.519 10:49:19 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:32.519 10:49:19 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:32.777 10:49:19 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:32.777 10:49:19 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:32.777 10:49:19 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:33.345 Malloc0 00:05:33.345 10:49:19 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:33.345 Malloc1 00:05:33.345 10:49:20 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:33.345 10:49:20 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:33.345 10:49:20 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:33.345 10:49:20 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:33.345 10:49:20 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:33.345 10:49:20 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:33.345 10:49:20 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:33.345 10:49:20 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:33.345 10:49:20 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:33.345 10:49:20 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:33.345 10:49:20 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:33.345 10:49:20 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:33.345 10:49:20 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:33.345 10:49:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:33.345 10:49:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:33.345 10:49:20 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:33.604 /dev/nbd0 00:05:33.604 10:49:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:33.604 10:49:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:33.604 10:49:20 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:33.604 10:49:20 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:33.604 10:49:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:33.604 10:49:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:33.604 10:49:20 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:33.604 10:49:20 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:33.604 10:49:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:33.604 10:49:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:33.604 10:49:20 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:33.604 1+0 records in 00:05:33.604 1+0 records out 
00:05:33.604 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000196682 s, 20.8 MB/s 00:05:33.604 10:49:20 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:33.604 10:49:20 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:33.604 10:49:20 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:33.604 10:49:20 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:33.604 10:49:20 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:33.604 10:49:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:33.604 10:49:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:33.604 10:49:20 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:33.864 /dev/nbd1 00:05:33.864 10:49:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:33.864 10:49:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:33.864 10:49:20 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:33.864 10:49:20 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:33.864 10:49:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:33.864 10:49:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:33.864 10:49:20 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:33.864 10:49:20 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:33.864 10:49:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:33.864 10:49:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:33.864 10:49:20 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:33.864 1+0 records in 00:05:33.864 1+0 records out 00:05:33.864 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000334359 s, 12.3 MB/s 00:05:33.864 10:49:20 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:33.864 10:49:20 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:33.864 10:49:20 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:33.864 10:49:20 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:33.864 10:49:20 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:33.864 10:49:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:33.864 10:49:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:33.864 10:49:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:33.864 10:49:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:33.864 10:49:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:34.431 10:49:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:34.431 { 00:05:34.431 "nbd_device": "/dev/nbd0", 00:05:34.432 "bdev_name": "Malloc0" 00:05:34.432 }, 00:05:34.432 { 00:05:34.432 "nbd_device": "/dev/nbd1", 00:05:34.432 "bdev_name": "Malloc1" 00:05:34.432 } 
00:05:34.432 ]' 00:05:34.432 10:49:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:34.432 { 00:05:34.432 "nbd_device": "/dev/nbd0", 00:05:34.432 "bdev_name": "Malloc0" 00:05:34.432 }, 00:05:34.432 { 00:05:34.432 "nbd_device": "/dev/nbd1", 00:05:34.432 "bdev_name": "Malloc1" 00:05:34.432 } 00:05:34.432 ]' 00:05:34.432 10:49:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:34.432 10:49:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:34.432 /dev/nbd1' 00:05:34.432 10:49:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:34.432 /dev/nbd1' 00:05:34.432 10:49:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:34.432 10:49:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:34.432 10:49:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:34.432 10:49:21 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:34.432 10:49:21 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:34.432 10:49:21 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:34.432 10:49:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:34.432 10:49:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:34.432 10:49:21 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:34.432 10:49:21 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:34.432 10:49:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:34.432 10:49:21 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:34.432 256+0 records in 00:05:34.432 256+0 records out 00:05:34.432 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00707657 s, 148 MB/s 00:05:34.432 10:49:21 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:34.432 10:49:21 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:34.432 256+0 records in 00:05:34.432 256+0 records out 00:05:34.432 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0243867 s, 43.0 MB/s 00:05:34.432 10:49:21 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:34.432 10:49:21 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:34.432 256+0 records in 00:05:34.432 256+0 records out 00:05:34.432 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0318496 s, 32.9 MB/s 00:05:34.432 10:49:21 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:34.432 10:49:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:34.432 10:49:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:34.432 10:49:21 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:34.432 10:49:21 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:34.432 10:49:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:34.432 10:49:21 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:34.432 10:49:21 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:34.432 10:49:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:34.432 10:49:21 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:34.432 10:49:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:34.432 10:49:21 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:34.432 10:49:21 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:34.432 10:49:21 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:34.432 10:49:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:34.432 10:49:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:34.432 10:49:21 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:34.432 10:49:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:34.432 10:49:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:34.691 10:49:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:34.691 10:49:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:34.691 10:49:21 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:34.691 10:49:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:34.691 10:49:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:34.691 10:49:21 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:34.691 10:49:21 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:34.691 10:49:21 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:34.691 10:49:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:34.691 10:49:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:34.950 10:49:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:34.950 10:49:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:34.950 10:49:21 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:34.950 10:49:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:34.950 10:49:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:34.950 10:49:21 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:34.950 10:49:21 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:34.950 10:49:21 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:34.950 10:49:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:34.950 10:49:21 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:34.950 10:49:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:35.210 10:49:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:35.210 10:49:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:35.210 10:49:21 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:05:35.210 10:49:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:35.210 10:49:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:35.210 10:49:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:35.210 10:49:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:35.210 10:49:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:35.210 10:49:22 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:35.210 10:49:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:35.210 10:49:22 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:35.210 10:49:22 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:35.210 10:49:22 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:35.469 10:49:22 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:35.727 [2024-11-15 10:49:22.517695] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:35.727 [2024-11-15 10:49:22.559919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:35.727 [2024-11-15 10:49:22.559945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.986 [2024-11-15 10:49:22.635773] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:35.986 [2024-11-15 10:49:22.635884] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:35.986 [2024-11-15 10:49:22.635900] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:38.519 10:49:25 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:38.519 spdk_app_start Round 2 00:05:38.519 10:49:25 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:38.519 10:49:25 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58108 /var/tmp/spdk-nbd.sock 00:05:38.519 10:49:25 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58108 ']' 00:05:38.519 10:49:25 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:38.519 10:49:25 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:38.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:38.519 10:49:25 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
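Throughout each round the harness also cross-checks how many nbd devices the target currently exports: nbd_get_disks returns a JSON array of {nbd_device, bdev_name} pairs, jq extracts the device paths, and grep -c counts them, with a trailing true because grep -c exits non-zero when the count is 0 (exactly the situation after nbd_stop_disk, as in the trace just above). A condensed sketch of that check follows; the nbd_count helper name and the way the expected value is passed in are choices made for this example only.

  #!/usr/bin/env bash
  rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock "$@"; }

  # Count the nbd devices the target currently exports.
  # grep -c prints 0 but exits 1 on no match, so force success to keep the "0".
  nbd_count() {
      rpc nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true
  }

  expected=${1:-0}   # e.g. 2 right after nbd_start_disk, 0 after nbd_stop_disk
  count=$(nbd_count)
  if [[ "$count" -ne "$expected" ]]; then
      echo "unexpected nbd count: got $count, wanted $expected" >&2
      exit 1
  fi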
00:05:38.519 10:49:25 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:38.519 10:49:25 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:38.778 10:49:25 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:38.778 10:49:25 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:38.778 10:49:25 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:39.037 Malloc0 00:05:39.037 10:49:25 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:39.606 Malloc1 00:05:39.606 10:49:26 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:39.606 10:49:26 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.606 10:49:26 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:39.606 10:49:26 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:39.606 10:49:26 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:39.606 10:49:26 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:39.606 10:49:26 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:39.606 10:49:26 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.606 10:49:26 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:39.606 10:49:26 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:39.606 10:49:26 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:39.606 10:49:26 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:39.606 10:49:26 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:39.606 10:49:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:39.606 10:49:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:39.606 10:49:26 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:39.606 /dev/nbd0 00:05:39.606 10:49:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:39.606 10:49:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:39.606 10:49:26 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:39.606 10:49:26 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:39.606 10:49:26 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:39.606 10:49:26 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:39.606 10:49:26 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:39.606 10:49:26 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:39.606 10:49:26 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:39.606 10:49:26 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:39.606 10:49:26 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:39.606 1+0 records in 00:05:39.606 1+0 records out 
00:05:39.606 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000340787 s, 12.0 MB/s 00:05:39.606 10:49:26 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:39.606 10:49:26 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:39.606 10:49:26 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:39.606 10:49:26 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:39.606 10:49:26 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:39.606 10:49:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:39.606 10:49:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:39.606 10:49:26 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:39.864 /dev/nbd1 00:05:40.122 10:49:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:40.122 10:49:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:40.122 10:49:26 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:40.122 10:49:26 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:40.122 10:49:26 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:40.122 10:49:26 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:40.122 10:49:26 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:40.122 10:49:26 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:40.122 10:49:26 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:40.122 10:49:26 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:40.122 10:49:26 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:40.122 1+0 records in 00:05:40.122 1+0 records out 00:05:40.122 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000377161 s, 10.9 MB/s 00:05:40.123 10:49:26 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:40.123 10:49:26 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:40.123 10:49:26 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:40.123 10:49:26 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:40.123 10:49:26 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:40.123 10:49:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:40.123 10:49:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:40.123 10:49:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:40.123 10:49:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.123 10:49:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:40.381 10:49:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:40.381 { 00:05:40.381 "nbd_device": "/dev/nbd0", 00:05:40.381 "bdev_name": "Malloc0" 00:05:40.381 }, 00:05:40.381 { 00:05:40.381 "nbd_device": "/dev/nbd1", 00:05:40.381 "bdev_name": "Malloc1" 00:05:40.381 } 
00:05:40.381 ]' 00:05:40.382 10:49:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:40.382 { 00:05:40.382 "nbd_device": "/dev/nbd0", 00:05:40.382 "bdev_name": "Malloc0" 00:05:40.382 }, 00:05:40.382 { 00:05:40.382 "nbd_device": "/dev/nbd1", 00:05:40.382 "bdev_name": "Malloc1" 00:05:40.382 } 00:05:40.382 ]' 00:05:40.382 10:49:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:40.382 10:49:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:40.382 /dev/nbd1' 00:05:40.382 10:49:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:40.382 /dev/nbd1' 00:05:40.382 10:49:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:40.382 10:49:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:40.382 10:49:27 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:40.382 10:49:27 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:40.382 10:49:27 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:40.382 10:49:27 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:40.382 10:49:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.382 10:49:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:40.382 10:49:27 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:40.382 10:49:27 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:40.382 10:49:27 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:40.382 10:49:27 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:40.382 256+0 records in 00:05:40.382 256+0 records out 00:05:40.382 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00853037 s, 123 MB/s 00:05:40.382 10:49:27 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:40.382 10:49:27 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:40.382 256+0 records in 00:05:40.382 256+0 records out 00:05:40.382 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0217475 s, 48.2 MB/s 00:05:40.382 10:49:27 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:40.382 10:49:27 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:40.382 256+0 records in 00:05:40.382 256+0 records out 00:05:40.382 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.023719 s, 44.2 MB/s 00:05:40.382 10:49:27 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:40.382 10:49:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.382 10:49:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:40.382 10:49:27 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:40.382 10:49:27 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:40.382 10:49:27 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:40.382 10:49:27 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:40.382 10:49:27 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:05:40.382 10:49:27 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:40.382 10:49:27 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:40.382 10:49:27 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:40.382 10:49:27 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:40.382 10:49:27 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:40.382 10:49:27 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.382 10:49:27 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.382 10:49:27 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:40.382 10:49:27 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:40.382 10:49:27 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:40.382 10:49:27 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:40.641 10:49:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:40.641 10:49:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:40.641 10:49:27 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:40.641 10:49:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:40.641 10:49:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:40.641 10:49:27 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:40.641 10:49:27 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:40.641 10:49:27 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:40.641 10:49:27 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:40.641 10:49:27 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:40.900 10:49:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:40.900 10:49:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:40.900 10:49:27 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:40.900 10:49:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:40.900 10:49:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:40.900 10:49:27 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:40.900 10:49:27 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:40.900 10:49:27 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:40.900 10:49:27 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:40.900 10:49:27 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.900 10:49:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:41.159 10:49:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:41.159 10:49:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:41.159 10:49:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:05:41.159 10:49:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:41.159 10:49:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:41.159 10:49:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:41.159 10:49:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:41.159 10:49:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:41.159 10:49:27 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:41.159 10:49:27 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:41.159 10:49:27 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:41.159 10:49:27 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:41.159 10:49:27 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:41.726 10:49:28 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:41.726 [2024-11-15 10:49:28.473049] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:41.726 [2024-11-15 10:49:28.522314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:41.726 [2024-11-15 10:49:28.522325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.985 [2024-11-15 10:49:28.589238] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:41.985 [2024-11-15 10:49:28.589372] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:41.985 [2024-11-15 10:49:28.589397] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:44.530 10:49:31 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58108 /var/tmp/spdk-nbd.sock 00:05:44.530 10:49:31 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58108 ']' 00:05:44.530 10:49:31 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:44.530 10:49:31 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:44.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:44.530 10:49:31 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
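
For reference, the bdev/nbd_common.sh helpers traced above reduce to a short write/verify round trip over the exported NBD devices. A minimal standalone sketch of that pattern, assuming the same rpc.py path and NBD socket as in the trace (the mktemp file stands in for the fixed test/event/nbdrandtest path used by the real helper):

  # Assumes an SPDK app is already listening on /var/tmp/spdk-nbd.sock
  # and Malloc0/Malloc1 were created with 'bdev_malloc_create 64 4096'.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-nbd.sock
  tmp=$(mktemp)                                   # stand-in for .../test/event/nbdrandtest

  "$rpc" -s "$sock" nbd_start_disk Malloc0 /dev/nbd0
  "$rpc" -s "$sock" nbd_start_disk Malloc1 /dev/nbd1

  dd if=/dev/urandom of="$tmp" bs=4096 count=256            # 1 MiB of random data
  for dev in /dev/nbd0 /dev/nbd1; do
      dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct  # write through to the bdev
      cmp -b -n 1M "$tmp" "$dev"                             # read back and compare
  done

  "$rpc" -s "$sock" nbd_stop_disk /dev/nbd0
  "$rpc" -s "$sock" nbd_stop_disk /dev/nbd1
  rm -f "$tmp"
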
00:05:44.530 10:49:31 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:44.530 10:49:31 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:44.789 10:49:31 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:44.789 10:49:31 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:44.789 10:49:31 event.app_repeat -- event/event.sh@39 -- # killprocess 58108 00:05:44.789 10:49:31 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58108 ']' 00:05:44.789 10:49:31 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58108 00:05:44.789 10:49:31 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:44.789 10:49:31 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:44.789 10:49:31 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58108 00:05:44.789 10:49:31 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:44.789 10:49:31 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:44.789 10:49:31 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58108' 00:05:44.789 killing process with pid 58108 00:05:44.789 10:49:31 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58108 00:05:44.789 10:49:31 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58108 00:05:45.048 spdk_app_start is called in Round 0. 00:05:45.048 Shutdown signal received, stop current app iteration 00:05:45.048 Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 reinitialization... 00:05:45.048 spdk_app_start is called in Round 1. 00:05:45.048 Shutdown signal received, stop current app iteration 00:05:45.048 Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 reinitialization... 00:05:45.048 spdk_app_start is called in Round 2. 00:05:45.048 Shutdown signal received, stop current app iteration 00:05:45.048 Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 reinitialization... 00:05:45.048 spdk_app_start is called in Round 3. 00:05:45.048 Shutdown signal received, stop current app iteration 00:05:45.048 10:49:31 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:45.048 10:49:31 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:45.048 00:05:45.048 real 0m18.567s 00:05:45.048 user 0m42.034s 00:05:45.048 sys 0m2.784s 00:05:45.048 10:49:31 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:45.048 10:49:31 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:45.048 ************************************ 00:05:45.048 END TEST app_repeat 00:05:45.048 ************************************ 00:05:45.048 10:49:31 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:45.048 10:49:31 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:45.048 10:49:31 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:45.048 10:49:31 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:45.048 10:49:31 event -- common/autotest_common.sh@10 -- # set +x 00:05:45.048 ************************************ 00:05:45.048 START TEST cpu_locks 00:05:45.048 ************************************ 00:05:45.048 10:49:31 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:45.307 * Looking for test storage... 
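
The killprocess calls traced above follow a common autotest pattern: confirm the pid is still alive and is not something privileged, then kill it and reap it. A rough reconstruction of that helper, under the assumption that the checks visible in the trace are the whole story (the real helper handles sudo-wrapped apps differently):

  killprocess() {
      local pid=$1
      [ -n "$pid" ] || return 1
      kill -0 "$pid" 2>/dev/null || return 1          # still alive?
      if [ "$(uname)" = Linux ]; then
          local name
          name=$(ps --no-headers -o comm= "$pid")     # an SPDK app shows up as reactor_0
          # sudo-wrapped apps are treated specially in the real helper; skipped here
          [ "$name" = sudo ] && return 1
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" || true                             # reap it (works because we spawned it)
  }
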
00:05:45.307 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:45.307 10:49:31 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:45.307 10:49:31 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:05:45.307 10:49:31 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:45.307 10:49:32 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:45.307 10:49:32 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:45.307 10:49:32 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:45.307 10:49:32 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:45.307 10:49:32 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:45.307 10:49:32 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:45.307 10:49:32 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:45.307 10:49:32 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:45.307 10:49:32 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:45.307 10:49:32 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:45.307 10:49:32 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:45.307 10:49:32 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:45.307 10:49:32 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:45.307 10:49:32 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:45.307 10:49:32 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:45.307 10:49:32 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:45.307 10:49:32 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:45.307 10:49:32 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:45.307 10:49:32 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:45.307 10:49:32 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:45.307 10:49:32 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:45.307 10:49:32 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:45.307 10:49:32 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:45.307 10:49:32 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:45.307 10:49:32 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:45.307 10:49:32 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:45.307 10:49:32 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:45.307 10:49:32 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:45.307 10:49:32 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:45.307 10:49:32 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:45.307 10:49:32 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:45.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.307 --rc genhtml_branch_coverage=1 00:05:45.307 --rc genhtml_function_coverage=1 00:05:45.307 --rc genhtml_legend=1 00:05:45.307 --rc geninfo_all_blocks=1 00:05:45.307 --rc geninfo_unexecuted_blocks=1 00:05:45.307 00:05:45.307 ' 00:05:45.307 10:49:32 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:45.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.307 --rc genhtml_branch_coverage=1 00:05:45.307 --rc genhtml_function_coverage=1 
00:05:45.307 --rc genhtml_legend=1 00:05:45.307 --rc geninfo_all_blocks=1 00:05:45.307 --rc geninfo_unexecuted_blocks=1 00:05:45.307 00:05:45.307 ' 00:05:45.307 10:49:32 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:45.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.307 --rc genhtml_branch_coverage=1 00:05:45.307 --rc genhtml_function_coverage=1 00:05:45.307 --rc genhtml_legend=1 00:05:45.307 --rc geninfo_all_blocks=1 00:05:45.307 --rc geninfo_unexecuted_blocks=1 00:05:45.307 00:05:45.307 ' 00:05:45.307 10:49:32 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:45.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.307 --rc genhtml_branch_coverage=1 00:05:45.307 --rc genhtml_function_coverage=1 00:05:45.307 --rc genhtml_legend=1 00:05:45.307 --rc geninfo_all_blocks=1 00:05:45.307 --rc geninfo_unexecuted_blocks=1 00:05:45.307 00:05:45.307 ' 00:05:45.307 10:49:32 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:45.307 10:49:32 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:45.307 10:49:32 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:45.307 10:49:32 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:45.307 10:49:32 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:45.307 10:49:32 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:45.307 10:49:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:45.307 ************************************ 00:05:45.307 START TEST default_locks 00:05:45.307 ************************************ 00:05:45.307 10:49:32 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:45.307 10:49:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:45.308 10:49:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58541 00:05:45.308 10:49:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58541 00:05:45.308 10:49:32 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58541 ']' 00:05:45.308 10:49:32 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.308 10:49:32 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:45.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:45.308 10:49:32 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:45.308 10:49:32 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:45.308 10:49:32 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:45.308 [2024-11-15 10:49:32.138415] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
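
The scripts/common.sh trace above ('lt 1.15 2' via cmp_versions/decimal) is just a field-by-field numeric compare of dotted version strings, used here to pick lcov options. A simplified sketch of the same idea, not the repo's exact implementation:

  version_lt() {            # returns 0 if $1 < $2
      local IFS=.- a b i
      read -ra a <<< "$1"
      read -ra b <<< "$2"
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          local x=${a[i]:-0} y=${b[i]:-0}
          (( x < y )) && return 0
          (( x > y )) && return 1
      done
      return 1               # equal is not "less than"
  }

  version_lt 1.15 2 && echo "lcov is older than 2.x"
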
00:05:45.308 [2024-11-15 10:49:32.138519] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58541 ] 00:05:45.567 [2024-11-15 10:49:32.285021] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.567 [2024-11-15 10:49:32.347897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.826 [2024-11-15 10:49:32.425172] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:45.826 10:49:32 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:45.826 10:49:32 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:45.826 10:49:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58541 00:05:45.826 10:49:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58541 00:05:45.826 10:49:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:46.395 10:49:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58541 00:05:46.395 10:49:33 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58541 ']' 00:05:46.395 10:49:33 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58541 00:05:46.395 10:49:33 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:46.395 10:49:33 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:46.395 10:49:33 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58541 00:05:46.395 10:49:33 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:46.395 10:49:33 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:46.395 killing process with pid 58541 00:05:46.395 10:49:33 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58541' 00:05:46.395 10:49:33 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58541 00:05:46.395 10:49:33 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58541 00:05:46.655 10:49:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58541 00:05:46.655 10:49:33 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:46.655 10:49:33 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58541 00:05:46.655 10:49:33 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:46.655 10:49:33 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:46.655 10:49:33 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:46.655 10:49:33 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:46.655 10:49:33 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58541 00:05:46.655 10:49:33 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58541 ']' 00:05:46.655 10:49:33 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.655 
10:49:33 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:46.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:46.655 10:49:33 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.655 10:49:33 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:46.655 10:49:33 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:46.655 ERROR: process (pid: 58541) is no longer running 00:05:46.655 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58541) - No such process 00:05:46.655 10:49:33 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:46.655 10:49:33 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:46.655 10:49:33 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:46.655 10:49:33 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:46.655 10:49:33 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:46.655 10:49:33 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:46.655 10:49:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:46.655 10:49:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:46.655 10:49:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:46.655 10:49:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:46.655 00:05:46.655 real 0m1.420s 00:05:46.655 user 0m1.392s 00:05:46.655 sys 0m0.532s 00:05:46.655 10:49:33 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:46.655 ************************************ 00:05:46.655 END TEST default_locks 00:05:46.655 ************************************ 00:05:46.655 10:49:33 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:46.914 10:49:33 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:46.914 10:49:33 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:46.914 10:49:33 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:46.914 10:49:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:46.914 ************************************ 00:05:46.914 START TEST default_locks_via_rpc 00:05:46.914 ************************************ 00:05:46.914 10:49:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:46.914 10:49:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58591 00:05:46.914 10:49:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58591 00:05:46.914 10:49:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58591 ']' 00:05:46.914 10:49:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:46.914 10:49:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.914 10:49:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:05:46.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:46.914 10:49:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.914 10:49:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:46.914 10:49:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.914 [2024-11-15 10:49:33.590147] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:05:46.914 [2024-11-15 10:49:33.590254] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58591 ] 00:05:46.914 [2024-11-15 10:49:33.729663] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.173 [2024-11-15 10:49:33.780392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.173 [2024-11-15 10:49:33.850345] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:47.432 10:49:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:47.432 10:49:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:47.432 10:49:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:47.432 10:49:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:47.432 10:49:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.432 10:49:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:47.432 10:49:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:47.432 10:49:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:47.432 10:49:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:47.432 10:49:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:47.432 10:49:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:47.432 10:49:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:47.432 10:49:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.432 10:49:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:47.432 10:49:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58591 00:05:47.432 10:49:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58591 00:05:47.432 10:49:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:47.692 10:49:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58591 00:05:47.692 10:49:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 58591 ']' 00:05:47.692 10:49:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 58591 00:05:47.692 10:49:34 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:47.692 10:49:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:47.692 10:49:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58591 00:05:47.692 killing process with pid 58591 00:05:47.692 10:49:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:47.692 10:49:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:47.692 10:49:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58591' 00:05:47.692 10:49:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 58591 00:05:47.692 10:49:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 58591 00:05:48.261 ************************************ 00:05:48.261 END TEST default_locks_via_rpc 00:05:48.261 ************************************ 00:05:48.261 00:05:48.261 real 0m1.377s 00:05:48.261 user 0m1.328s 00:05:48.261 sys 0m0.544s 00:05:48.261 10:49:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:48.261 10:49:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.261 10:49:34 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:48.261 10:49:34 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:48.261 10:49:34 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:48.261 10:49:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:48.261 ************************************ 00:05:48.261 START TEST non_locking_app_on_locked_coremask 00:05:48.261 ************************************ 00:05:48.261 10:49:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:48.261 10:49:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58629 00:05:48.261 10:49:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58629 /var/tmp/spdk.sock 00:05:48.261 10:49:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:48.261 10:49:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58629 ']' 00:05:48.261 10:49:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:48.261 10:49:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:48.261 10:49:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
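
default_locks_via_rpc above exercises the runtime toggles for the per-core lock files. Assuming a target is already up on /var/tmp/spdk.sock, the same sequence can be driven by hand roughly as below; the lock-file check mirrors the lslocks/grep used by the locks_exist helper, and the pgrep line is only illustrative (the tests track the pid they spawned):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk.sock
  pid=$(pgrep -f spdk_tgt | head -n1)      # illustrative; use the pid you started

  "$rpc" -s "$sock" framework_disable_cpumask_locks
  lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "unexpected: core lock still held"

  "$rpc" -s "$sock" framework_enable_cpumask_locks
  lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "core lock re-acquired as expected"
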
00:05:48.261 10:49:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:48.261 10:49:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:48.261 [2024-11-15 10:49:35.023232] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:05:48.261 [2024-11-15 10:49:35.023342] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58629 ] 00:05:48.521 [2024-11-15 10:49:35.171144] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.521 [2024-11-15 10:49:35.217367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.521 [2024-11-15 10:49:35.288097] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:48.781 10:49:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:48.781 10:49:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:48.781 10:49:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58643 00:05:48.781 10:49:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:48.781 10:49:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58643 /var/tmp/spdk2.sock 00:05:48.781 10:49:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58643 ']' 00:05:48.781 10:49:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:48.781 10:49:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:48.781 10:49:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:48.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:48.781 10:49:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:48.781 10:49:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:48.781 [2024-11-15 10:49:35.567442] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:05:48.781 [2024-11-15 10:49:35.567567] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58643 ] 00:05:49.040 [2024-11-15 10:49:35.721409] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
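
non_locking_app_on_locked_coremask shows the intended way to co-locate two targets on the same core: the second instance opts out of the core lock and uses its own RPC socket. In outline, with the binary path, core mask and flags taken from the trace:

  bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

  "$bin" -m 0x1 &                                          # first target claims core 0's lock file
  pid1=$!
  # second target: same core mask, but opts out of the lock and gets a private RPC socket
  "$bin" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
  pid2=$!
  # both can now be driven independently via /var/tmp/spdk.sock and /var/tmp/spdk2.sock
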
00:05:49.040 [2024-11-15 10:49:35.721470] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.040 [2024-11-15 10:49:35.810825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.299 [2024-11-15 10:49:35.949708] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:49.867 10:49:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:49.867 10:49:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:49.867 10:49:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58629 00:05:49.867 10:49:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58629 00:05:49.867 10:49:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:50.436 10:49:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58629 00:05:50.436 10:49:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58629 ']' 00:05:50.436 10:49:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58629 00:05:50.436 10:49:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:50.436 10:49:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:50.436 10:49:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58629 00:05:50.694 killing process with pid 58629 00:05:50.694 10:49:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:50.694 10:49:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:50.694 10:49:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58629' 00:05:50.694 10:49:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58629 00:05:50.694 10:49:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58629 00:05:51.262 10:49:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58643 00:05:51.262 10:49:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58643 ']' 00:05:51.262 10:49:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58643 00:05:51.262 10:49:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:51.262 10:49:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:51.262 10:49:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58643 00:05:51.262 killing process with pid 58643 00:05:51.262 10:49:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:51.262 10:49:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:51.262 10:49:38 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58643' 00:05:51.262 10:49:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58643 00:05:51.262 10:49:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58643 00:05:51.827 ************************************ 00:05:51.827 END TEST non_locking_app_on_locked_coremask 00:05:51.827 ************************************ 00:05:51.827 00:05:51.827 real 0m3.480s 00:05:51.827 user 0m3.703s 00:05:51.827 sys 0m1.058s 00:05:51.827 10:49:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:51.827 10:49:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:51.827 10:49:38 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:51.827 10:49:38 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:51.827 10:49:38 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:51.827 10:49:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:51.827 ************************************ 00:05:51.827 START TEST locking_app_on_unlocked_coremask 00:05:51.828 ************************************ 00:05:51.828 10:49:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:05:51.828 10:49:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=58710 00:05:51.828 10:49:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:51.828 10:49:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 58710 /var/tmp/spdk.sock 00:05:51.828 10:49:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58710 ']' 00:05:51.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:51.828 10:49:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.828 10:49:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:51.828 10:49:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:51.828 10:49:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:51.828 10:49:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:51.828 [2024-11-15 10:49:38.539120] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:05:51.828 [2024-11-15 10:49:38.539201] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58710 ] 00:05:51.828 [2024-11-15 10:49:38.676572] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
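
Each 'Waiting for process to start up and listen on UNIX domain socket ...' line above comes from a waitforlisten-style helper. One way to approximate it is sketched below; the real helper differs, and rpc_get_methods is only assumed here as a cheap RPC to probe with:

  waitforlisten() {
      local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
      echo "Waiting for process to start up and listen on UNIX domain socket $sock..."
      for ((i = 0; i < 100; i++)); do
          kill -0 "$pid" 2>/dev/null || return 1        # give up if the app already died
          if [ -S "$sock" ] &&
             /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null; then
              return 0                                   # socket exists and answers RPCs
          fi
          sleep 0.1
      done
      return 1
  }
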
00:05:51.828 [2024-11-15 10:49:38.676613] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.086 [2024-11-15 10:49:38.727541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.086 [2024-11-15 10:49:38.796358] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:52.344 10:49:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:52.344 10:49:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:52.344 10:49:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=58713 00:05:52.344 10:49:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 58713 /var/tmp/spdk2.sock 00:05:52.344 10:49:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:52.344 10:49:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58713 ']' 00:05:52.344 10:49:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:52.345 10:49:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:52.345 10:49:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:52.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:52.345 10:49:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:52.345 10:49:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:52.345 [2024-11-15 10:49:39.051229] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
00:05:52.345 [2024-11-15 10:49:39.051328] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58713 ] 00:05:52.603 [2024-11-15 10:49:39.211547] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.603 [2024-11-15 10:49:39.332165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.876 [2024-11-15 10:49:39.472651] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:53.145 10:49:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:53.145 10:49:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:53.145 10:49:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 58713 00:05:53.145 10:49:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58713 00:05:53.145 10:49:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:54.081 10:49:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 58710 00:05:54.081 10:49:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58710 ']' 00:05:54.081 10:49:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 58710 00:05:54.081 10:49:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:54.081 10:49:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:54.081 10:49:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58710 00:05:54.081 killing process with pid 58710 00:05:54.081 10:49:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:54.081 10:49:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:54.081 10:49:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58710' 00:05:54.081 10:49:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 58710 00:05:54.081 10:49:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 58710 00:05:55.018 10:49:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 58713 00:05:55.018 10:49:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58713 ']' 00:05:55.018 10:49:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 58713 00:05:55.018 10:49:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:55.018 10:49:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:55.018 10:49:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58713 00:05:55.018 killing process with pid 58713 00:05:55.018 10:49:41 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:55.018 10:49:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:55.018 10:49:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58713' 00:05:55.018 10:49:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 58713 00:05:55.018 10:49:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 58713 00:05:55.277 ************************************ 00:05:55.277 END TEST locking_app_on_unlocked_coremask 00:05:55.277 ************************************ 00:05:55.277 00:05:55.277 real 0m3.537s 00:05:55.277 user 0m3.808s 00:05:55.277 sys 0m1.069s 00:05:55.278 10:49:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:55.278 10:49:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:55.278 10:49:42 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:55.278 10:49:42 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:55.278 10:49:42 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:55.278 10:49:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:55.278 ************************************ 00:05:55.278 START TEST locking_app_on_locked_coremask 00:05:55.278 ************************************ 00:05:55.278 10:49:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:05:55.278 10:49:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=58780 00:05:55.278 10:49:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 58780 /var/tmp/spdk.sock 00:05:55.278 10:49:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:55.278 10:49:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58780 ']' 00:05:55.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:55.278 10:49:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.278 10:49:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:55.278 10:49:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.278 10:49:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:55.278 10:49:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:55.537 [2024-11-15 10:49:42.135827] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
00:05:55.538 [2024-11-15 10:49:42.135935] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58780 ] 00:05:55.538 [2024-11-15 10:49:42.271972] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.538 [2024-11-15 10:49:42.321120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.538 [2024-11-15 10:49:42.395530] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:55.800 10:49:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:55.800 10:49:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:55.800 10:49:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=58790 00:05:55.800 10:49:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:55.800 10:49:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 58790 /var/tmp/spdk2.sock 00:05:55.800 10:49:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:55.800 10:49:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58790 /var/tmp/spdk2.sock 00:05:55.800 10:49:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:55.800 10:49:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:55.800 10:49:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:55.800 10:49:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:55.800 10:49:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 58790 /var/tmp/spdk2.sock 00:05:55.800 10:49:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58790 ']' 00:05:55.800 10:49:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:55.800 10:49:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:55.800 10:49:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:55.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:55.800 10:49:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:55.800 10:49:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:56.060 [2024-11-15 10:49:42.660657] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
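
locking_app_on_locked_coremask relies on the failure path traced just below: a second target started without --disable-cpumask-locks cannot come up on a core whose lock file is already held, and spdk_app_start bails out with 'Cannot create lock on core 0'. A negative-test sketch of that expectation (the sleep is a crude stand-in for waitforlisten):

  bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

  "$bin" -m 0x1 &          # this instance holds the core 0 lock
  pid1=$!
  sleep 1                  # crude stand-in for waitforlisten on /var/tmp/spdk.sock

  # Expected to fail: same core mask, locks left enabled, only the RPC socket differs.
  if "$bin" -m 0x1 -r /var/tmp/spdk2.sock; then
      echo "FAIL: second instance should not have acquired core 0" >&2
  else
      echo "OK: second instance refused to start, as expected"
  fi
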
00:05:56.060 [2024-11-15 10:49:42.660789] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58790 ] 00:05:56.060 [2024-11-15 10:49:42.814519] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 58780 has claimed it. 00:05:56.060 [2024-11-15 10:49:42.818670] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:56.628 ERROR: process (pid: 58790) is no longer running 00:05:56.628 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58790) - No such process 00:05:56.628 10:49:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:56.628 10:49:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:56.628 10:49:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:56.628 10:49:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:56.628 10:49:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:56.628 10:49:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:56.628 10:49:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 58780 00:05:56.628 10:49:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58780 00:05:56.628 10:49:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:57.196 10:49:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 58780 00:05:57.196 10:49:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58780 ']' 00:05:57.196 10:49:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58780 00:05:57.196 10:49:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:57.196 10:49:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:57.196 10:49:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58780 00:05:57.196 killing process with pid 58780 00:05:57.196 10:49:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:57.196 10:49:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:57.196 10:49:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58780' 00:05:57.196 10:49:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58780 00:05:57.196 10:49:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58780 00:05:57.456 00:05:57.456 real 0m2.141s 00:05:57.456 user 0m2.408s 00:05:57.456 sys 0m0.621s 00:05:57.456 10:49:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:57.456 10:49:44 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:05:57.456 ************************************ 00:05:57.456 END TEST locking_app_on_locked_coremask 00:05:57.456 ************************************ 00:05:57.456 10:49:44 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:57.456 10:49:44 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:57.456 10:49:44 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:57.456 10:49:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:57.456 ************************************ 00:05:57.456 START TEST locking_overlapped_coremask 00:05:57.456 ************************************ 00:05:57.456 10:49:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:05:57.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:57.456 10:49:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=58834 00:05:57.456 10:49:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:57.456 10:49:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 58834 /var/tmp/spdk.sock 00:05:57.456 10:49:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 58834 ']' 00:05:57.456 10:49:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.456 10:49:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:57.456 10:49:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.456 10:49:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:57.456 10:49:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:57.715 [2024-11-15 10:49:44.331872] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
00:05:57.715 [2024-11-15 10:49:44.332136] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58834 ] 00:05:57.715 [2024-11-15 10:49:44.468657] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:57.715 [2024-11-15 10:49:44.514750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:57.715 [2024-11-15 10:49:44.514883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:57.715 [2024-11-15 10:49:44.514887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.974 [2024-11-15 10:49:44.584752] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:57.974 10:49:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:57.974 10:49:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:57.974 10:49:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=58850 00:05:57.974 10:49:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:57.974 10:49:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 58850 /var/tmp/spdk2.sock 00:05:57.974 10:49:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:57.974 10:49:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58850 /var/tmp/spdk2.sock 00:05:57.974 10:49:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:57.974 10:49:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:57.974 10:49:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:57.974 10:49:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:57.974 10:49:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 58850 /var/tmp/spdk2.sock 00:05:57.974 10:49:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 58850 ']' 00:05:57.974 10:49:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:57.974 10:49:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:57.974 10:49:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:57.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:57.974 10:49:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:57.974 10:49:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:58.233 [2024-11-15 10:49:44.852770] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
00:05:58.233 [2024-11-15 10:49:44.853030] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58850 ] 00:05:58.233 [2024-11-15 10:49:45.013756] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 58834 has claimed it. 00:05:58.233 [2024-11-15 10:49:45.013842] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:58.800 ERROR: process (pid: 58850) is no longer running 00:05:58.800 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58850) - No such process 00:05:58.800 10:49:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:58.800 10:49:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:58.800 10:49:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:58.800 10:49:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:58.800 10:49:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:58.800 10:49:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:58.800 10:49:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:58.800 10:49:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:58.800 10:49:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:58.800 10:49:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:58.800 10:49:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 58834 00:05:58.800 10:49:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 58834 ']' 00:05:58.800 10:49:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 58834 00:05:58.800 10:49:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:05:58.800 10:49:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:58.800 10:49:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58834 00:05:58.800 10:49:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:58.800 10:49:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:58.800 10:49:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58834' 00:05:58.800 killing process with pid 58834 00:05:58.800 10:49:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 58834 00:05:58.800 10:49:45 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 58834 00:05:59.369 00:05:59.369 real 0m1.712s 00:05:59.369 user 0m4.726s 00:05:59.369 sys 0m0.390s 00:05:59.369 10:49:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:59.369 10:49:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:59.369 ************************************ 00:05:59.369 END TEST locking_overlapped_coremask 00:05:59.369 ************************************ 00:05:59.369 10:49:46 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:59.369 10:49:46 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:59.369 10:49:46 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:59.369 10:49:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:59.369 ************************************ 00:05:59.369 START TEST locking_overlapped_coremask_via_rpc 00:05:59.369 ************************************ 00:05:59.369 10:49:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:05:59.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:59.369 10:49:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=58890 00:05:59.369 10:49:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 58890 /var/tmp/spdk.sock 00:05:59.369 10:49:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:59.369 10:49:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58890 ']' 00:05:59.369 10:49:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.369 10:49:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:59.369 10:49:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.369 10:49:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:59.369 10:49:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.369 [2024-11-15 10:49:46.096492] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:05:59.369 [2024-11-15 10:49:46.096599] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58890 ] 00:05:59.628 [2024-11-15 10:49:46.234524] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:59.628 [2024-11-15 10:49:46.234749] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:59.628 [2024-11-15 10:49:46.280380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:59.628 [2024-11-15 10:49:46.280512] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:59.628 [2024-11-15 10:49:46.280515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.628 [2024-11-15 10:49:46.349476] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:59.888 10:49:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:59.888 10:49:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:59.888 10:49:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:59.888 10:49:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=58901 00:05:59.888 10:49:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 58901 /var/tmp/spdk2.sock 00:05:59.888 10:49:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58901 ']' 00:05:59.888 10:49:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:59.888 10:49:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:59.888 10:49:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:59.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:59.888 10:49:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:59.888 10:49:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.888 [2024-11-15 10:49:46.603839] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:05:59.888 [2024-11-15 10:49:46.603929] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58901 ] 00:06:00.147 [2024-11-15 10:49:46.756353] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:00.147 [2024-11-15 10:49:46.756403] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:00.147 [2024-11-15 10:49:46.866904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:00.147 [2024-11-15 10:49:46.870682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:00.147 [2024-11-15 10:49:46.870695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:00.406 [2024-11-15 10:49:47.044862] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:00.975 10:49:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:00.975 10:49:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:00.975 10:49:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:00.975 10:49:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:00.975 10:49:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.975 10:49:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:00.975 10:49:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:00.975 10:49:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:00.975 10:49:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:00.975 10:49:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:00.975 10:49:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:00.975 10:49:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:00.975 10:49:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:00.975 10:49:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:00.975 10:49:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:00.975 10:49:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.975 [2024-11-15 10:49:47.698649] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 58890 has claimed it. 
00:06:00.975 request: 00:06:00.975 { 00:06:00.975 "method": "framework_enable_cpumask_locks", 00:06:00.975 "req_id": 1 00:06:00.975 } 00:06:00.975 Got JSON-RPC error response 00:06:00.975 response: 00:06:00.975 { 00:06:00.975 "code": -32603, 00:06:00.975 "message": "Failed to claim CPU core: 2" 00:06:00.975 } 00:06:00.975 10:49:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:00.975 10:49:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:00.975 10:49:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:00.975 10:49:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:00.975 10:49:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:00.975 10:49:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 58890 /var/tmp/spdk.sock 00:06:00.975 10:49:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58890 ']' 00:06:00.975 10:49:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.975 10:49:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:00.975 10:49:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:00.975 10:49:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:00.975 10:49:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.234 10:49:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:01.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:01.234 10:49:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:01.234 10:49:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 58901 /var/tmp/spdk2.sock 00:06:01.234 10:49:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58901 ']' 00:06:01.234 10:49:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:01.234 10:49:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:01.234 10:49:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:06:01.234 10:49:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:01.234 10:49:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.494 10:49:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:01.494 10:49:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:01.494 10:49:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:01.494 10:49:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:01.494 10:49:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:01.494 10:49:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:01.494 00:06:01.494 real 0m2.215s 00:06:01.494 user 0m1.256s 00:06:01.494 sys 0m0.184s 00:06:01.494 10:49:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:01.494 10:49:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.494 ************************************ 00:06:01.494 END TEST locking_overlapped_coremask_via_rpc 00:06:01.494 ************************************ 00:06:01.494 10:49:48 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:01.494 10:49:48 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 58890 ]] 00:06:01.494 10:49:48 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 58890 00:06:01.494 10:49:48 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 58890 ']' 00:06:01.494 10:49:48 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 58890 00:06:01.494 10:49:48 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:01.494 10:49:48 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:01.494 10:49:48 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58890 00:06:01.494 killing process with pid 58890 00:06:01.494 10:49:48 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:01.494 10:49:48 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:01.494 10:49:48 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58890' 00:06:01.494 10:49:48 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 58890 00:06:01.494 10:49:48 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 58890 00:06:02.062 10:49:48 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 58901 ]] 00:06:02.062 10:49:48 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 58901 00:06:02.062 10:49:48 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 58901 ']' 00:06:02.062 10:49:48 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 58901 00:06:02.062 10:49:48 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:02.062 10:49:48 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:02.062 
10:49:48 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58901 00:06:02.062 killing process with pid 58901 00:06:02.063 10:49:48 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:02.063 10:49:48 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:02.063 10:49:48 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58901' 00:06:02.063 10:49:48 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 58901 00:06:02.063 10:49:48 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 58901 00:06:02.630 10:49:49 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:02.630 Process with pid 58890 is not found 00:06:02.630 Process with pid 58901 is not found 00:06:02.630 10:49:49 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:02.630 10:49:49 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 58890 ]] 00:06:02.630 10:49:49 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 58890 00:06:02.630 10:49:49 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 58890 ']' 00:06:02.630 10:49:49 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 58890 00:06:02.630 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (58890) - No such process 00:06:02.630 10:49:49 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 58890 is not found' 00:06:02.630 10:49:49 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 58901 ]] 00:06:02.630 10:49:49 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 58901 00:06:02.630 10:49:49 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 58901 ']' 00:06:02.630 10:49:49 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 58901 00:06:02.630 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (58901) - No such process 00:06:02.631 10:49:49 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 58901 is not found' 00:06:02.631 10:49:49 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:02.631 00:06:02.631 real 0m17.557s 00:06:02.631 user 0m31.475s 00:06:02.631 sys 0m5.362s 00:06:02.631 10:49:49 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:02.631 10:49:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:02.631 ************************************ 00:06:02.631 END TEST cpu_locks 00:06:02.631 ************************************ 00:06:02.631 00:06:02.631 real 0m44.248s 00:06:02.631 user 1m25.712s 00:06:02.631 sys 0m8.889s 00:06:02.631 10:49:49 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:02.631 10:49:49 event -- common/autotest_common.sh@10 -- # set +x 00:06:02.631 ************************************ 00:06:02.631 END TEST event 00:06:02.631 ************************************ 00:06:02.890 10:49:49 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:02.890 10:49:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:02.890 10:49:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:02.890 10:49:49 -- common/autotest_common.sh@10 -- # set +x 00:06:02.890 ************************************ 00:06:02.890 START TEST thread 00:06:02.890 ************************************ 00:06:02.890 10:49:49 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:02.890 * Looking for test storage... 
00:06:02.890 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:02.890 10:49:49 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:02.890 10:49:49 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:06:02.890 10:49:49 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:02.890 10:49:49 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:02.890 10:49:49 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:02.890 10:49:49 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:02.890 10:49:49 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:02.890 10:49:49 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:02.890 10:49:49 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:02.890 10:49:49 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:02.890 10:49:49 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:02.890 10:49:49 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:02.890 10:49:49 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:02.890 10:49:49 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:02.890 10:49:49 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:02.890 10:49:49 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:02.890 10:49:49 thread -- scripts/common.sh@345 -- # : 1 00:06:02.890 10:49:49 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:02.890 10:49:49 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:02.890 10:49:49 thread -- scripts/common.sh@365 -- # decimal 1 00:06:02.890 10:49:49 thread -- scripts/common.sh@353 -- # local d=1 00:06:02.890 10:49:49 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:02.890 10:49:49 thread -- scripts/common.sh@355 -- # echo 1 00:06:02.890 10:49:49 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:02.890 10:49:49 thread -- scripts/common.sh@366 -- # decimal 2 00:06:02.890 10:49:49 thread -- scripts/common.sh@353 -- # local d=2 00:06:02.890 10:49:49 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:02.890 10:49:49 thread -- scripts/common.sh@355 -- # echo 2 00:06:02.890 10:49:49 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:02.890 10:49:49 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:02.890 10:49:49 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:02.890 10:49:49 thread -- scripts/common.sh@368 -- # return 0 00:06:02.890 10:49:49 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:02.890 10:49:49 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:02.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.890 --rc genhtml_branch_coverage=1 00:06:02.890 --rc genhtml_function_coverage=1 00:06:02.890 --rc genhtml_legend=1 00:06:02.890 --rc geninfo_all_blocks=1 00:06:02.890 --rc geninfo_unexecuted_blocks=1 00:06:02.890 00:06:02.890 ' 00:06:02.890 10:49:49 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:02.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.890 --rc genhtml_branch_coverage=1 00:06:02.890 --rc genhtml_function_coverage=1 00:06:02.890 --rc genhtml_legend=1 00:06:02.890 --rc geninfo_all_blocks=1 00:06:02.890 --rc geninfo_unexecuted_blocks=1 00:06:02.890 00:06:02.890 ' 00:06:02.890 10:49:49 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:02.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:02.890 --rc genhtml_branch_coverage=1 00:06:02.890 --rc genhtml_function_coverage=1 00:06:02.890 --rc genhtml_legend=1 00:06:02.890 --rc geninfo_all_blocks=1 00:06:02.890 --rc geninfo_unexecuted_blocks=1 00:06:02.890 00:06:02.890 ' 00:06:02.890 10:49:49 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:02.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.890 --rc genhtml_branch_coverage=1 00:06:02.890 --rc genhtml_function_coverage=1 00:06:02.890 --rc genhtml_legend=1 00:06:02.890 --rc geninfo_all_blocks=1 00:06:02.890 --rc geninfo_unexecuted_blocks=1 00:06:02.890 00:06:02.890 ' 00:06:02.890 10:49:49 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:02.890 10:49:49 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:02.890 10:49:49 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:02.890 10:49:49 thread -- common/autotest_common.sh@10 -- # set +x 00:06:02.890 ************************************ 00:06:02.890 START TEST thread_poller_perf 00:06:02.890 ************************************ 00:06:02.890 10:49:49 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:02.890 [2024-11-15 10:49:49.717407] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:06:02.890 [2024-11-15 10:49:49.717664] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59037 ] 00:06:03.149 [2024-11-15 10:49:49.869809] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.149 [2024-11-15 10:49:49.939095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.149 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:04.527 [2024-11-15T10:49:51.388Z] ====================================== 00:06:04.527 [2024-11-15T10:49:51.388Z] busy:2215054738 (cyc) 00:06:04.527 [2024-11-15T10:49:51.388Z] total_run_count: 337000 00:06:04.527 [2024-11-15T10:49:51.388Z] tsc_hz: 2200000000 (cyc) 00:06:04.527 [2024-11-15T10:49:51.388Z] ====================================== 00:06:04.527 [2024-11-15T10:49:51.388Z] poller_cost: 6572 (cyc), 2987 (nsec) 00:06:04.527 00:06:04.527 real 0m1.304s 00:06:04.527 user 0m1.157s 00:06:04.527 sys 0m0.040s 00:06:04.527 ************************************ 00:06:04.527 END TEST thread_poller_perf 00:06:04.527 ************************************ 00:06:04.527 10:49:51 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:04.527 10:49:51 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:04.527 10:49:51 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:04.527 10:49:51 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:04.527 10:49:51 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:04.527 10:49:51 thread -- common/autotest_common.sh@10 -- # set +x 00:06:04.527 ************************************ 00:06:04.527 START TEST thread_poller_perf 00:06:04.527 ************************************ 00:06:04.527 10:49:51 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:04.527 [2024-11-15 10:49:51.073262] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:06:04.527 [2024-11-15 10:49:51.073351] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59072 ] 00:06:04.527 [2024-11-15 10:49:51.215444] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.527 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:06:04.527 [2024-11-15 10:49:51.266872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.468 [2024-11-15T10:49:52.329Z] ====================================== 00:06:05.468 [2024-11-15T10:49:52.329Z] busy:2201983304 (cyc) 00:06:05.468 [2024-11-15T10:49:52.329Z] total_run_count: 4629000 00:06:05.468 [2024-11-15T10:49:52.329Z] tsc_hz: 2200000000 (cyc) 00:06:05.468 [2024-11-15T10:49:52.329Z] ====================================== 00:06:05.468 [2024-11-15T10:49:52.329Z] poller_cost: 475 (cyc), 215 (nsec) 00:06:05.468 00:06:05.468 real 0m1.258s 00:06:05.468 user 0m1.111s 00:06:05.468 sys 0m0.040s 00:06:05.468 10:49:52 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:05.468 ************************************ 00:06:05.468 END TEST thread_poller_perf 00:06:05.468 ************************************ 00:06:05.468 10:49:52 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:05.727 10:49:52 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:05.727 00:06:05.727 real 0m2.845s 00:06:05.727 user 0m2.420s 00:06:05.727 sys 0m0.205s 00:06:05.727 10:49:52 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:05.727 ************************************ 00:06:05.727 END TEST thread 00:06:05.727 ************************************ 00:06:05.727 10:49:52 thread -- common/autotest_common.sh@10 -- # set +x 00:06:05.727 10:49:52 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:05.727 10:49:52 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:05.727 10:49:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:05.727 10:49:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:05.727 10:49:52 -- common/autotest_common.sh@10 -- # set +x 00:06:05.727 ************************************ 00:06:05.727 START TEST app_cmdline 00:06:05.727 ************************************ 00:06:05.727 10:49:52 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:05.727 * Looking for test storage... 
00:06:05.727 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:05.727 10:49:52 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:05.727 10:49:52 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:06:05.727 10:49:52 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:05.727 10:49:52 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:05.727 10:49:52 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:05.727 10:49:52 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:05.727 10:49:52 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:05.727 10:49:52 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:05.727 10:49:52 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:05.727 10:49:52 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:05.727 10:49:52 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:05.727 10:49:52 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:05.727 10:49:52 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:05.727 10:49:52 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:05.727 10:49:52 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:05.727 10:49:52 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:05.727 10:49:52 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:05.728 10:49:52 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:05.728 10:49:52 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:05.728 10:49:52 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:05.728 10:49:52 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:05.728 10:49:52 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:05.728 10:49:52 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:05.728 10:49:52 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:05.728 10:49:52 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:05.987 10:49:52 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:05.987 10:49:52 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:05.987 10:49:52 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:05.987 10:49:52 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:05.987 10:49:52 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:05.987 10:49:52 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:05.987 10:49:52 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:05.987 10:49:52 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:05.987 10:49:52 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:05.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.987 --rc genhtml_branch_coverage=1 00:06:05.987 --rc genhtml_function_coverage=1 00:06:05.987 --rc genhtml_legend=1 00:06:05.987 --rc geninfo_all_blocks=1 00:06:05.987 --rc geninfo_unexecuted_blocks=1 00:06:05.987 00:06:05.987 ' 00:06:05.987 10:49:52 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:05.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.987 --rc genhtml_branch_coverage=1 00:06:05.987 --rc genhtml_function_coverage=1 00:06:05.987 --rc genhtml_legend=1 00:06:05.987 --rc geninfo_all_blocks=1 00:06:05.987 --rc geninfo_unexecuted_blocks=1 00:06:05.987 
00:06:05.987 ' 00:06:05.987 10:49:52 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:05.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.987 --rc genhtml_branch_coverage=1 00:06:05.987 --rc genhtml_function_coverage=1 00:06:05.987 --rc genhtml_legend=1 00:06:05.987 --rc geninfo_all_blocks=1 00:06:05.987 --rc geninfo_unexecuted_blocks=1 00:06:05.987 00:06:05.987 ' 00:06:05.987 10:49:52 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:05.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.987 --rc genhtml_branch_coverage=1 00:06:05.987 --rc genhtml_function_coverage=1 00:06:05.987 --rc genhtml_legend=1 00:06:05.987 --rc geninfo_all_blocks=1 00:06:05.987 --rc geninfo_unexecuted_blocks=1 00:06:05.987 00:06:05.987 ' 00:06:05.987 10:49:52 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:05.987 10:49:52 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59155 00:06:05.987 10:49:52 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59155 00:06:05.987 10:49:52 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59155 ']' 00:06:05.987 10:49:52 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:05.987 10:49:52 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.987 10:49:52 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:05.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:05.987 10:49:52 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:05.987 10:49:52 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:05.987 10:49:52 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:05.987 [2024-11-15 10:49:52.655900] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
00:06:05.987 [2024-11-15 10:49:52.656017] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59155 ] 00:06:05.987 [2024-11-15 10:49:52.803651] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.246 [2024-11-15 10:49:52.860977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.246 [2024-11-15 10:49:52.928943] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:06.505 10:49:53 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:06.505 10:49:53 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:06.505 10:49:53 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:06.763 { 00:06:06.763 "version": "SPDK v25.01-pre git sha1 f1a181ac3", 00:06:06.763 "fields": { 00:06:06.763 "major": 25, 00:06:06.763 "minor": 1, 00:06:06.763 "patch": 0, 00:06:06.763 "suffix": "-pre", 00:06:06.763 "commit": "f1a181ac3" 00:06:06.763 } 00:06:06.763 } 00:06:06.763 10:49:53 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:06.764 10:49:53 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:06.764 10:49:53 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:06.764 10:49:53 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:06.764 10:49:53 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:06.764 10:49:53 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:06.764 10:49:53 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.764 10:49:53 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:06.764 10:49:53 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:06.764 10:49:53 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.764 10:49:53 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:06.764 10:49:53 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:06.764 10:49:53 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:06.764 10:49:53 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:06.764 10:49:53 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:06.764 10:49:53 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:06.764 10:49:53 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:06.764 10:49:53 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:06.764 10:49:53 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:06.764 10:49:53 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:06.764 10:49:53 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:06.764 10:49:53 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:06.764 10:49:53 app_cmdline -- common/autotest_common.sh@646 -- # 
[[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:06.764 10:49:53 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:07.023 request: 00:06:07.023 { 00:06:07.023 "method": "env_dpdk_get_mem_stats", 00:06:07.023 "req_id": 1 00:06:07.023 } 00:06:07.023 Got JSON-RPC error response 00:06:07.023 response: 00:06:07.023 { 00:06:07.023 "code": -32601, 00:06:07.023 "message": "Method not found" 00:06:07.023 } 00:06:07.023 10:49:53 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:07.023 10:49:53 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:07.023 10:49:53 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:07.023 10:49:53 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:07.023 10:49:53 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59155 00:06:07.023 10:49:53 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59155 ']' 00:06:07.023 10:49:53 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59155 00:06:07.023 10:49:53 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:07.023 10:49:53 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:07.023 10:49:53 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59155 00:06:07.023 10:49:53 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:07.023 killing process with pid 59155 00:06:07.023 10:49:53 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:07.023 10:49:53 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59155' 00:06:07.023 10:49:53 app_cmdline -- common/autotest_common.sh@973 -- # kill 59155 00:06:07.023 10:49:53 app_cmdline -- common/autotest_common.sh@978 -- # wait 59155 00:06:07.592 00:06:07.592 real 0m1.770s 00:06:07.592 user 0m2.198s 00:06:07.592 sys 0m0.453s 00:06:07.592 10:49:54 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:07.592 10:49:54 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:07.592 ************************************ 00:06:07.592 END TEST app_cmdline 00:06:07.592 ************************************ 00:06:07.592 10:49:54 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:07.592 10:49:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:07.592 10:49:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:07.592 10:49:54 -- common/autotest_common.sh@10 -- # set +x 00:06:07.592 ************************************ 00:06:07.592 START TEST version 00:06:07.592 ************************************ 00:06:07.592 10:49:54 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:07.592 * Looking for test storage... 
00:06:07.592 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:07.592 10:49:54 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:07.592 10:49:54 version -- common/autotest_common.sh@1693 -- # lcov --version 00:06:07.592 10:49:54 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:07.592 10:49:54 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:07.592 10:49:54 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:07.592 10:49:54 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:07.592 10:49:54 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:07.592 10:49:54 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:07.592 10:49:54 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:07.592 10:49:54 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:07.592 10:49:54 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:07.592 10:49:54 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:07.592 10:49:54 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:07.592 10:49:54 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:07.592 10:49:54 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:07.592 10:49:54 version -- scripts/common.sh@344 -- # case "$op" in 00:06:07.592 10:49:54 version -- scripts/common.sh@345 -- # : 1 00:06:07.592 10:49:54 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:07.592 10:49:54 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:07.592 10:49:54 version -- scripts/common.sh@365 -- # decimal 1 00:06:07.592 10:49:54 version -- scripts/common.sh@353 -- # local d=1 00:06:07.592 10:49:54 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:07.592 10:49:54 version -- scripts/common.sh@355 -- # echo 1 00:06:07.592 10:49:54 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:07.592 10:49:54 version -- scripts/common.sh@366 -- # decimal 2 00:06:07.592 10:49:54 version -- scripts/common.sh@353 -- # local d=2 00:06:07.592 10:49:54 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:07.592 10:49:54 version -- scripts/common.sh@355 -- # echo 2 00:06:07.592 10:49:54 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:07.592 10:49:54 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:07.592 10:49:54 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:07.592 10:49:54 version -- scripts/common.sh@368 -- # return 0 00:06:07.592 10:49:54 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:07.592 10:49:54 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:07.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.592 --rc genhtml_branch_coverage=1 00:06:07.592 --rc genhtml_function_coverage=1 00:06:07.592 --rc genhtml_legend=1 00:06:07.592 --rc geninfo_all_blocks=1 00:06:07.592 --rc geninfo_unexecuted_blocks=1 00:06:07.592 00:06:07.592 ' 00:06:07.592 10:49:54 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:07.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.592 --rc genhtml_branch_coverage=1 00:06:07.592 --rc genhtml_function_coverage=1 00:06:07.592 --rc genhtml_legend=1 00:06:07.592 --rc geninfo_all_blocks=1 00:06:07.592 --rc geninfo_unexecuted_blocks=1 00:06:07.592 00:06:07.592 ' 00:06:07.592 10:49:54 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:07.592 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:07.592 --rc genhtml_branch_coverage=1 00:06:07.592 --rc genhtml_function_coverage=1 00:06:07.592 --rc genhtml_legend=1 00:06:07.592 --rc geninfo_all_blocks=1 00:06:07.592 --rc geninfo_unexecuted_blocks=1 00:06:07.592 00:06:07.592 ' 00:06:07.592 10:49:54 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:07.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.592 --rc genhtml_branch_coverage=1 00:06:07.592 --rc genhtml_function_coverage=1 00:06:07.592 --rc genhtml_legend=1 00:06:07.592 --rc geninfo_all_blocks=1 00:06:07.592 --rc geninfo_unexecuted_blocks=1 00:06:07.592 00:06:07.592 ' 00:06:07.592 10:49:54 version -- app/version.sh@17 -- # get_header_version major 00:06:07.592 10:49:54 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:07.592 10:49:54 version -- app/version.sh@14 -- # cut -f2 00:06:07.592 10:49:54 version -- app/version.sh@14 -- # tr -d '"' 00:06:07.592 10:49:54 version -- app/version.sh@17 -- # major=25 00:06:07.592 10:49:54 version -- app/version.sh@18 -- # get_header_version minor 00:06:07.592 10:49:54 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:07.592 10:49:54 version -- app/version.sh@14 -- # cut -f2 00:06:07.592 10:49:54 version -- app/version.sh@14 -- # tr -d '"' 00:06:07.592 10:49:54 version -- app/version.sh@18 -- # minor=1 00:06:07.592 10:49:54 version -- app/version.sh@19 -- # get_header_version patch 00:06:07.592 10:49:54 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:07.592 10:49:54 version -- app/version.sh@14 -- # tr -d '"' 00:06:07.592 10:49:54 version -- app/version.sh@14 -- # cut -f2 00:06:07.592 10:49:54 version -- app/version.sh@19 -- # patch=0 00:06:07.592 10:49:54 version -- app/version.sh@20 -- # get_header_version suffix 00:06:07.592 10:49:54 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:07.592 10:49:54 version -- app/version.sh@14 -- # tr -d '"' 00:06:07.592 10:49:54 version -- app/version.sh@14 -- # cut -f2 00:06:07.592 10:49:54 version -- app/version.sh@20 -- # suffix=-pre 00:06:07.593 10:49:54 version -- app/version.sh@22 -- # version=25.1 00:06:07.593 10:49:54 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:07.593 10:49:54 version -- app/version.sh@28 -- # version=25.1rc0 00:06:07.593 10:49:54 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:07.852 10:49:54 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:07.852 10:49:54 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:07.852 10:49:54 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:07.852 00:06:07.852 real 0m0.259s 00:06:07.852 user 0m0.178s 00:06:07.852 sys 0m0.121s 00:06:07.852 10:49:54 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:07.852 ************************************ 00:06:07.852 END TEST version 00:06:07.852 ************************************ 00:06:07.852 10:49:54 version -- common/autotest_common.sh@10 -- # set +x 00:06:07.852 10:49:54 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:07.852 10:49:54 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:07.852 10:49:54 -- spdk/autotest.sh@194 -- # uname -s 00:06:07.852 10:49:54 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:07.852 10:49:54 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:07.852 10:49:54 -- spdk/autotest.sh@195 -- # [[ 1 -eq 1 ]] 00:06:07.852 10:49:54 -- spdk/autotest.sh@201 -- # [[ 0 -eq 0 ]] 00:06:07.852 10:49:54 -- spdk/autotest.sh@202 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:06:07.852 10:49:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:07.852 10:49:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:07.852 10:49:54 -- common/autotest_common.sh@10 -- # set +x 00:06:07.852 ************************************ 00:06:07.852 START TEST spdk_dd 00:06:07.852 ************************************ 00:06:07.852 10:49:54 spdk_dd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:06:07.852 * Looking for test storage... 00:06:07.852 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:07.852 10:49:54 spdk_dd -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:07.852 10:49:54 spdk_dd -- common/autotest_common.sh@1693 -- # lcov --version 00:06:07.852 10:49:54 spdk_dd -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:08.111 10:49:54 spdk_dd -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:08.111 10:49:54 spdk_dd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:08.111 10:49:54 spdk_dd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:08.111 10:49:54 spdk_dd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:08.111 10:49:54 spdk_dd -- scripts/common.sh@336 -- # IFS=.-: 00:06:08.111 10:49:54 spdk_dd -- scripts/common.sh@336 -- # read -ra ver1 00:06:08.111 10:49:54 spdk_dd -- scripts/common.sh@337 -- # IFS=.-: 00:06:08.111 10:49:54 spdk_dd -- scripts/common.sh@337 -- # read -ra ver2 00:06:08.111 10:49:54 spdk_dd -- scripts/common.sh@338 -- # local 'op=<' 00:06:08.111 10:49:54 spdk_dd -- scripts/common.sh@340 -- # ver1_l=2 00:06:08.111 10:49:54 spdk_dd -- scripts/common.sh@341 -- # ver2_l=1 00:06:08.111 10:49:54 spdk_dd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:08.111 10:49:54 spdk_dd -- scripts/common.sh@344 -- # case "$op" in 00:06:08.111 10:49:54 spdk_dd -- scripts/common.sh@345 -- # : 1 00:06:08.111 10:49:54 spdk_dd -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:08.111 10:49:54 spdk_dd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:08.111 10:49:54 spdk_dd -- scripts/common.sh@365 -- # decimal 1 00:06:08.111 10:49:54 spdk_dd -- scripts/common.sh@353 -- # local d=1 00:06:08.111 10:49:54 spdk_dd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:08.111 10:49:54 spdk_dd -- scripts/common.sh@355 -- # echo 1 00:06:08.111 10:49:54 spdk_dd -- scripts/common.sh@365 -- # ver1[v]=1 00:06:08.111 10:49:54 spdk_dd -- scripts/common.sh@366 -- # decimal 2 00:06:08.111 10:49:54 spdk_dd -- scripts/common.sh@353 -- # local d=2 00:06:08.111 10:49:54 spdk_dd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:08.111 10:49:54 spdk_dd -- scripts/common.sh@355 -- # echo 2 00:06:08.111 10:49:54 spdk_dd -- scripts/common.sh@366 -- # ver2[v]=2 00:06:08.111 10:49:54 spdk_dd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:08.111 10:49:54 spdk_dd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:08.111 10:49:54 spdk_dd -- scripts/common.sh@368 -- # return 0 00:06:08.111 10:49:54 spdk_dd -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:08.111 10:49:54 spdk_dd -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:08.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.111 --rc genhtml_branch_coverage=1 00:06:08.111 --rc genhtml_function_coverage=1 00:06:08.111 --rc genhtml_legend=1 00:06:08.111 --rc geninfo_all_blocks=1 00:06:08.111 --rc geninfo_unexecuted_blocks=1 00:06:08.111 00:06:08.111 ' 00:06:08.111 10:49:54 spdk_dd -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:08.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.111 --rc genhtml_branch_coverage=1 00:06:08.111 --rc genhtml_function_coverage=1 00:06:08.111 --rc genhtml_legend=1 00:06:08.111 --rc geninfo_all_blocks=1 00:06:08.111 --rc geninfo_unexecuted_blocks=1 00:06:08.111 00:06:08.111 ' 00:06:08.111 10:49:54 spdk_dd -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:08.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.111 --rc genhtml_branch_coverage=1 00:06:08.111 --rc genhtml_function_coverage=1 00:06:08.111 --rc genhtml_legend=1 00:06:08.111 --rc geninfo_all_blocks=1 00:06:08.111 --rc geninfo_unexecuted_blocks=1 00:06:08.111 00:06:08.111 ' 00:06:08.111 10:49:54 spdk_dd -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:08.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.111 --rc genhtml_branch_coverage=1 00:06:08.111 --rc genhtml_function_coverage=1 00:06:08.111 --rc genhtml_legend=1 00:06:08.111 --rc geninfo_all_blocks=1 00:06:08.111 --rc geninfo_unexecuted_blocks=1 00:06:08.111 00:06:08.111 ' 00:06:08.111 10:49:54 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:08.111 10:49:54 spdk_dd -- scripts/common.sh@15 -- # shopt -s extglob 00:06:08.111 10:49:54 spdk_dd -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:08.111 10:49:54 spdk_dd -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:08.111 10:49:54 spdk_dd -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:08.111 10:49:54 spdk_dd -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.111 10:49:54 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.111 10:49:54 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.111 10:49:54 spdk_dd -- paths/export.sh@5 -- # export PATH 00:06:08.112 10:49:54 spdk_dd -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.112 10:49:54 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:08.371 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:08.371 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:08.371 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:08.371 10:49:55 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:06:08.371 10:49:55 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:06:08.372 10:49:55 spdk_dd -- scripts/common.sh@312 -- # local bdf bdfs 00:06:08.372 10:49:55 spdk_dd -- scripts/common.sh@313 -- # local nvmes 00:06:08.372 10:49:55 spdk_dd -- scripts/common.sh@315 -- # [[ -n '' ]] 00:06:08.372 10:49:55 spdk_dd -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:06:08.372 10:49:55 spdk_dd -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:06:08.372 10:49:55 spdk_dd -- scripts/common.sh@298 -- # local bdf= 00:06:08.372 10:49:55 spdk_dd -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:06:08.372 10:49:55 spdk_dd -- scripts/common.sh@233 -- # local class 00:06:08.372 10:49:55 spdk_dd -- scripts/common.sh@234 -- # local subclass 00:06:08.372 10:49:55 spdk_dd -- scripts/common.sh@235 -- # local progif 00:06:08.372 10:49:55 spdk_dd -- scripts/common.sh@236 -- # printf %02x 1 00:06:08.372 10:49:55 spdk_dd -- scripts/common.sh@236 -- # class=01 00:06:08.372 10:49:55 spdk_dd -- scripts/common.sh@237 -- # printf %02x 8 00:06:08.372 10:49:55 spdk_dd -- scripts/common.sh@237 -- # subclass=08 00:06:08.372 10:49:55 spdk_dd -- scripts/common.sh@238 -- # printf %02x 2 00:06:08.372 10:49:55 spdk_dd -- 
scripts/common.sh@238 -- # progif=02 00:06:08.372 10:49:55 spdk_dd -- scripts/common.sh@240 -- # hash lspci 00:06:08.372 10:49:55 spdk_dd -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:06:08.372 10:49:55 spdk_dd -- scripts/common.sh@243 -- # grep -i -- -p02 00:06:08.372 10:49:55 spdk_dd -- scripts/common.sh@242 -- # lspci -mm -n -D 00:06:08.372 10:49:55 spdk_dd -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:06:08.372 10:49:55 spdk_dd -- scripts/common.sh@245 -- # tr -d '"' 00:06:08.372 10:49:55 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:06:08.372 10:49:55 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:06:08.372 10:49:55 spdk_dd -- scripts/common.sh@18 -- # local i 00:06:08.372 10:49:55 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:06:08.372 10:49:55 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:06:08.372 10:49:55 spdk_dd -- scripts/common.sh@27 -- # return 0 00:06:08.372 10:49:55 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:06:08.372 10:49:55 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:06:08.372 10:49:55 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:06:08.372 10:49:55 spdk_dd -- scripts/common.sh@18 -- # local i 00:06:08.372 10:49:55 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:06:08.372 10:49:55 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:06:08.372 10:49:55 spdk_dd -- scripts/common.sh@27 -- # return 0 00:06:08.372 10:49:55 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:06:08.372 10:49:55 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:06:08.372 10:49:55 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:06:08.372 10:49:55 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:06:08.372 10:49:55 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:06:08.372 10:49:55 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:06:08.372 10:49:55 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:06:08.372 10:49:55 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:06:08.372 10:49:55 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:06:08.372 10:49:55 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:06:08.372 10:49:55 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:06:08.372 10:49:55 spdk_dd -- scripts/common.sh@328 -- # (( 2 )) 00:06:08.372 10:49:55 spdk_dd -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:06:08.372 10:49:55 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:06:08.372 10:49:55 spdk_dd -- dd/common.sh@139 -- # local lib 00:06:08.372 10:49:55 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:06:08.372 10:49:55 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.372 10:49:55 spdk_dd -- dd/common.sh@137 -- # grep NEEDED 00:06:08.372 10:49:55 spdk_dd -- dd/common.sh@137 -- # objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:08.372 10:49:55 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:06:08.372 10:49:55 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.372 10:49:55 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:06:08.372 10:49:55 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.372 10:49:55 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.1 == liburing.so.* ]] 
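[Editor's note] The nvme_in_userspace step traced just above finds NVMe controllers by PCI class code (class 01, subclass 08, prog-if 02), filters them through pci_can_use, and ends up printing 0000:00:10.0 and 0000:00:11.0. A minimal stand-alone sketch of that discovery step, assuming only that lspci is installed; the field layout mirrors the trace, but the one-liner below is illustrative and is not the SPDK script itself:

  #!/usr/bin/env bash
  # lspci -mm -n -D prints: BDF "class" "vendor" "device" ...
  # Column 2 is the class code ("0108" = mass storage / NVM / NVMe), column 1 the PCI address.
  lspci -mm -n -D | awk -F' ' '$2 ~ /0108/ {print $1}' | tr -d '"'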
00:06:08.372 10:49:55 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.372 10:49:55 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:06:08.372 10:49:55 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.372 10:49:55 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:06:08.372 10:49:55 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.372 10:49:55 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:06:08.372 10:49:55 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.372 10:49:55 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:06:08.372 10:49:55 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.372 10:49:55 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:06:08.372 10:49:55 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.372 10:49:55 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:06:08.372 10:49:55 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.372 10:49:55 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 00:06:08.372 10:49:55 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.372 10:49:55 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:06:08.372 10:49:55 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.372 10:49:55 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:06:08.372 10:49:55 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.372 10:49:55 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.10.0 == liburing.so.* ]] 00:06:08.372 10:49:55 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.372 10:49:55 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.11.0 == liburing.so.* ]] 00:06:08.372 10:49:55 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.372 10:49:55 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.10.0 == liburing.so.* ]] 00:06:08.372 10:49:55 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.372 10:49:55 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.11.0 == liburing.so.* ]] 00:06:08.372 10:49:55 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.372 10:49:55 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.15.0 == liburing.so.* ]] 00:06:08.372 10:49:55 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.372 10:49:55 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.7.0 == liburing.so.* ]] 00:06:08.372 10:49:55 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.372 10:49:55 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 00:06:08.372 10:49:55 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.372 10:49:55 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:06:08.372 10:49:55 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.372 10:49:55 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:06:08.372 10:49:55 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.372 10:49:55 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:06:08.372 10:49:55 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.372 10:49:55 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:06:08.372 10:49:55 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 
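[Editor's note] The loop that started just above and continues through the remaining NEEDED entries below is check_liburing from dd/common.sh: it runs objdump -p on the spdk_dd binary, keeps the NEEDED lines, and tests each library name against liburing.so.*. Condensed into a stand-alone sketch (the binary path and the variable name liburing_in_use are taken from the trace; the rest is illustrative):

  #!/usr/bin/env bash
  liburing_in_use=0
  while read -r _ lib _; do
      # Each matching line looks like: "  NEEDED  liburing.so.2"
      [[ $lib == liburing.so.* ]] && liburing_in_use=1
  done < <(objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd | grep NEEDED)
  echo "liburing_in_use=$liburing_in_use"

In this run the final NEEDED entry, liburing.so.2, matches, which is why the trace below prints "* spdk_dd linked to liburing" and exports liburing_in_use=1.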
00:06:08.372 10:49:55 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:06:08.372 10:49:55 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.372 10:49:55 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:06:08.372 10:49:55 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.372 10:49:55 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:06:08.372 10:49:55 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.372 10:49:55 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:06:08.372 10:49:55 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.372 10:49:55 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:06:08.372 10:49:55 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.372 10:49:55 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:06:08.372 10:49:55 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.372 10:49:55 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:06:08.372 10:49:55 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.372 10:49:55 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:06:08.372 10:49:55 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.372 10:49:55 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:06:08.372 10:49:55 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.372 10:49:55 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.1 == liburing.so.* ]] 00:06:08.372 10:49:55 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.372 10:49:55 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:06:08.372 10:49:55 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.373 10:49:55 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.15.1 == liburing.so.* ]] 00:06:08.373 10:49:55 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.373 10:49:55 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:06:08.373 10:49:55 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.373 10:49:55 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:06:08.373 10:49:55 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.373 10:49:55 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:06:08.373 10:49:55 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.373 10:49:55 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:06:08.373 10:49:55 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.373 10:49:55 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.2.0 == liburing.so.* ]] 00:06:08.373 10:49:55 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.373 10:49:55 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 00:06:08.373 10:49:55 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.373 10:49:55 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev_aio.so.1.0 == liburing.so.* ]] 00:06:08.373 10:49:55 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.373 10:49:55 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev.so.2.0 == liburing.so.* ]] 00:06:08.373 10:49:55 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.373 10:49:55 spdk_dd -- 
dd/common.sh@143 -- # [[ libspdk_event.so.14.0 == liburing.so.* ]] 00:06:08.373 10:49:55 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.373 10:49:55 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:06:08.373 10:49:55 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.373 10:49:55 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.17.0 == liburing.so.* ]] 00:06:08.373 10:49:55 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.373 10:49:55 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:06:08.373 10:49:55 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.373 10:49:55 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:06:08.373 10:49:55 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.373 10:49:55 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.16.0 == liburing.so.* ]] 00:06:08.373 10:49:55 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.373 10:49:55 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.5.0 == liburing.so.* ]] 00:06:08.373 10:49:55 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.373 10:49:55 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:06:08.373 10:49:55 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.373 10:49:55 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:06:08.373 10:49:55 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.373 10:49:55 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:06:08.373 10:49:55 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.373 10:49:55 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == liburing.so.* ]] 00:06:08.373 10:49:55 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.373 10:49:55 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:06:08.373 10:49:55 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.373 10:49:55 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:06:08.373 10:49:55 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.373 10:49:55 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.6.0 == liburing.so.* ]] 00:06:08.373 10:49:55 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.373 10:49:55 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.11.0 == liburing.so.* ]] 00:06:08.373 10:49:55 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.373 10:49:55 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.11.0 == liburing.so.* ]] 00:06:08.373 10:49:55 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.373 10:49:55 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.2.0 == liburing.so.* ]] 00:06:08.373 10:49:55 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.373 10:49:55 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 00:06:08.373 10:49:55 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.373 10:49:55 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:06:08.373 10:49:55 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.373 10:49:55 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:06:08.373 10:49:55 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.373 10:49:55 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.10.1 == liburing.so.* ]] 00:06:08.373 10:49:55 spdk_dd -- dd/common.sh@142 -- 
# read -r _ lib _ 00:06:08.373 10:49:55 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.1 == liburing.so.* ]] 00:06:08.373 10:49:55 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.373 10:49:55 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:06:08.373 10:49:55 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.373 10:49:55 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:06:08.373 10:49:55 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.373 10:49:55 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:06:08.373 10:49:55 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.373 10:49:55 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:06:08.373 10:49:55 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.373 10:49:55 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:06:08.373 10:49:55 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.373 10:49:55 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:06:08.373 10:49:55 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.373 10:49:55 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:06:08.373 10:49:55 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.373 10:49:55 spdk_dd -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:06:08.373 10:49:55 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.373 10:49:55 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:06:08.373 10:49:55 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.373 10:49:55 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:06:08.373 10:49:55 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.373 10:49:55 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:06:08.373 10:49:55 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.373 10:49:55 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:06:08.373 10:49:55 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.373 10:49:55 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:06:08.373 10:49:55 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.373 10:49:55 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:06:08.373 10:49:55 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.373 10:49:55 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:06:08.373 10:49:55 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.373 10:49:55 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:06:08.373 10:49:55 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.373 10:49:55 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:06:08.373 10:49:55 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.373 10:49:55 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:06:08.373 10:49:55 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:08.373 10:49:55 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:06:08.373 10:49:55 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:06:08.373 * spdk_dd linked to liburing 00:06:08.373 10:49:55 spdk_dd -- dd/common.sh@146 -- # [[ -e 
/home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:06:08.373 10:49:55 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:06:08.373 10:49:55 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:08.373 10:49:55 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:08.373 10:49:55 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:08.373 10:49:55 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:08.373 10:49:55 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:06:08.373 10:49:55 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:08.373 10:49:55 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:08.373 10:49:55 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:08.373 10:49:55 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:08.373 10:49:55 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:08.373 10:49:55 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:08.373 10:49:55 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:08.373 10:49:55 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:08.373 10:49:55 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:08.373 10:49:55 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:08.634 10:49:55 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:08.634 10:49:55 spdk_dd -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:06:08.634 10:49:55 spdk_dd -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:06:08.634 10:49:55 spdk_dd -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:08.634 10:49:55 spdk_dd -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:06:08.634 10:49:55 spdk_dd -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:06:08.634 10:49:55 spdk_dd -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:06:08.634 10:49:55 spdk_dd -- common/build_config.sh@23 -- # CONFIG_CET=n 00:06:08.634 10:49:55 spdk_dd -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:08.634 10:49:55 spdk_dd -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:06:08.634 10:49:55 spdk_dd -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:06:08.634 10:49:55 spdk_dd -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:06:08.634 10:49:55 spdk_dd -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:08.634 10:49:55 spdk_dd -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:08.634 10:49:55 spdk_dd -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:06:08.634 10:49:55 spdk_dd -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:06:08.634 10:49:55 spdk_dd -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:06:08.634 10:49:55 spdk_dd -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:06:08.634 10:49:55 spdk_dd -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:06:08.634 10:49:55 spdk_dd -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:06:08.635 10:49:55 spdk_dd -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:06:08.635 10:49:55 spdk_dd -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:06:08.635 10:49:55 spdk_dd -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:06:08.635 10:49:55 spdk_dd -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:06:08.635 10:49:55 spdk_dd -- common/build_config.sh@40 -- # 
CONFIG_CRYPTO=n 00:06:08.635 10:49:55 spdk_dd -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:06:08.635 10:49:55 spdk_dd -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:06:08.635 10:49:55 spdk_dd -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:06:08.635 10:49:55 spdk_dd -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:06:08.635 10:49:55 spdk_dd -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:06:08.635 10:49:55 spdk_dd -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:06:08.635 10:49:55 spdk_dd -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:08.635 10:49:55 spdk_dd -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:06:08.635 10:49:55 spdk_dd -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:06:08.635 10:49:55 spdk_dd -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:06:08.635 10:49:55 spdk_dd -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:06:08.635 10:49:55 spdk_dd -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:06:08.635 10:49:55 spdk_dd -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:06:08.635 10:49:55 spdk_dd -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:08.635 10:49:55 spdk_dd -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:06:08.635 10:49:55 spdk_dd -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:06:08.635 10:49:55 spdk_dd -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:06:08.635 10:49:55 spdk_dd -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:06:08.635 10:49:55 spdk_dd -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:06:08.635 10:49:55 spdk_dd -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=y 00:06:08.635 10:49:55 spdk_dd -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:06:08.635 10:49:55 spdk_dd -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:06:08.635 10:49:55 spdk_dd -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:06:08.635 10:49:55 spdk_dd -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:06:08.635 10:49:55 spdk_dd -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:06:08.635 10:49:55 spdk_dd -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:06:08.635 10:49:55 spdk_dd -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:06:08.635 10:49:55 spdk_dd -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:06:08.635 10:49:55 spdk_dd -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:06:08.635 10:49:55 spdk_dd -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:06:08.635 10:49:55 spdk_dd -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:06:08.635 10:49:55 spdk_dd -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:06:08.635 10:49:55 spdk_dd -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:06:08.635 10:49:55 spdk_dd -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:06:08.635 10:49:55 spdk_dd -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:08.635 10:49:55 spdk_dd -- common/build_config.sh@76 -- # CONFIG_FC=n 00:06:08.635 10:49:55 spdk_dd -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:06:08.635 10:49:55 spdk_dd -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:06:08.635 10:49:55 spdk_dd -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:06:08.635 10:49:55 spdk_dd -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:06:08.635 10:49:55 spdk_dd -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:06:08.635 10:49:55 spdk_dd -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:06:08.635 10:49:55 spdk_dd 
-- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:06:08.635 10:49:55 spdk_dd -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:06:08.635 10:49:55 spdk_dd -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:06:08.635 10:49:55 spdk_dd -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:06:08.635 10:49:55 spdk_dd -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:08.635 10:49:55 spdk_dd -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:06:08.635 10:49:55 spdk_dd -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:06:08.635 10:49:55 spdk_dd -- common/build_config.sh@90 -- # CONFIG_URING=y 00:06:08.635 10:49:55 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:06:08.635 10:49:55 spdk_dd -- dd/common.sh@152 -- # export liburing_in_use=1 00:06:08.635 10:49:55 spdk_dd -- dd/common.sh@152 -- # liburing_in_use=1 00:06:08.635 10:49:55 spdk_dd -- dd/common.sh@153 -- # return 0 00:06:08.635 10:49:55 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:06:08.635 10:49:55 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:06:08.635 10:49:55 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:08.635 10:49:55 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:08.635 10:49:55 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:08.635 ************************************ 00:06:08.635 START TEST spdk_dd_basic_rw 00:06:08.635 ************************************ 00:06:08.635 10:49:55 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:06:08.635 * Looking for test storage... 00:06:08.635 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:08.635 10:49:55 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:08.635 10:49:55 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1693 -- # lcov --version 00:06:08.635 10:49:55 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:08.635 10:49:55 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:08.635 10:49:55 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:08.635 10:49:55 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:08.635 10:49:55 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:08.635 10:49:55 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # IFS=.-: 00:06:08.635 10:49:55 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # read -ra ver1 00:06:08.635 10:49:55 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # IFS=.-: 00:06:08.635 10:49:55 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # read -ra ver2 00:06:08.635 10:49:55 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@338 -- # local 'op=<' 00:06:08.635 10:49:55 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@340 -- # ver1_l=2 00:06:08.635 10:49:55 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@341 -- # ver2_l=1 00:06:08.635 10:49:55 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:08.635 10:49:55 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@344 -- # case "$op" in 00:06:08.635 10:49:55 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@345 -- # : 1 00:06:08.635 10:49:55 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:08.635 10:49:55 spdk_dd.spdk_dd_basic_rw -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:08.635 10:49:55 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # decimal 1 00:06:08.635 10:49:55 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=1 00:06:08.635 10:49:55 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:08.635 10:49:55 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 1 00:06:08.635 10:49:55 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # ver1[v]=1 00:06:08.635 10:49:55 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # decimal 2 00:06:08.635 10:49:55 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=2 00:06:08.635 10:49:55 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:08.635 10:49:55 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 2 00:06:08.635 10:49:55 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # ver2[v]=2 00:06:08.635 10:49:55 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:08.635 10:49:55 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:08.635 10:49:55 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # return 0 00:06:08.635 10:49:55 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:08.635 10:49:55 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:08.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.635 --rc genhtml_branch_coverage=1 00:06:08.635 --rc genhtml_function_coverage=1 00:06:08.635 --rc genhtml_legend=1 00:06:08.635 --rc geninfo_all_blocks=1 00:06:08.635 --rc geninfo_unexecuted_blocks=1 00:06:08.635 00:06:08.635 ' 00:06:08.635 10:49:55 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:08.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.635 --rc genhtml_branch_coverage=1 00:06:08.635 --rc genhtml_function_coverage=1 00:06:08.635 --rc genhtml_legend=1 00:06:08.635 --rc geninfo_all_blocks=1 00:06:08.635 --rc geninfo_unexecuted_blocks=1 00:06:08.635 00:06:08.635 ' 00:06:08.635 10:49:55 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:08.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.635 --rc genhtml_branch_coverage=1 00:06:08.635 --rc genhtml_function_coverage=1 00:06:08.635 --rc genhtml_legend=1 00:06:08.635 --rc geninfo_all_blocks=1 00:06:08.635 --rc geninfo_unexecuted_blocks=1 00:06:08.635 00:06:08.635 ' 00:06:08.635 10:49:55 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:08.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.635 --rc genhtml_branch_coverage=1 00:06:08.635 --rc genhtml_function_coverage=1 00:06:08.635 --rc genhtml_legend=1 00:06:08.635 --rc geninfo_all_blocks=1 00:06:08.635 --rc geninfo_unexecuted_blocks=1 00:06:08.635 00:06:08.635 ' 00:06:08.635 10:49:55 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:08.635 10:49:55 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@15 -- # shopt -s extglob 00:06:08.635 10:49:55 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:08.635 10:49:55 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:08.635 10:49:55 spdk_dd.spdk_dd_basic_rw -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:08.635 10:49:55 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.636 10:49:55 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.636 10:49:55 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.636 10:49:55 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:06:08.636 10:49:55 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.636 10:49:55 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:06:08.636 10:49:55 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:06:08.636 10:49:55 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:06:08.636 10:49:55 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:06:08.636 10:49:55 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:06:08.636 10:49:55 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:06:08.636 10:49:55 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:06:08.636 10:49:55 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:08.636 10:49:55 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 
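[Editor's note] The lcov version gate that has now run three times in this trace (under "version", "spdk_dd", and "spdk_dd_basic_rw") is cmp_versions from scripts/common.sh: it splits both version strings on ".", "-" and ":", then compares the fields numerically left to right; "lt 1.15 2" succeeds, so the branch/function-coverage LCOV options are exported each time. A condensed stand-alone sketch of that comparison; the function name and the zero-default field handling are illustrative, not the script itself:

  #!/usr/bin/env bash
  # Returns success (0) when $1 is a strictly lower version than $2.
  lt() {
      local -a ver1 ver2
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$2"
      local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < max; v++ )); do
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # earlier field decides
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
      done
      return 1   # equal versions are not "less than"
  }
  lt 1.15 2 && echo "lcov older than 2.x: add --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"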
00:06:08.636 10:49:55 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:06:08.636 10:49:55 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:06:08.636 10:49:55 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:06:08.636 10:49:55 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:06:08.898 10:49:55 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update 
Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 
Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:06:08.898 10:49:55 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:06:08.899 10:49:55 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration 
Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported 
SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format 
#02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:06:08.899 10:49:55 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:06:08.899 10:49:55 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:06:08.899 10:49:55 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:06:08.899 10:49:55 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:06:08.899 10:49:55 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:08.899 10:49:55 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:06:08.899 10:49:55 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:08.899 10:49:55 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:08.899 10:49:55 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:08.899 10:49:55 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:08.899 10:49:55 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:08.899 ************************************ 00:06:08.899 START TEST dd_bs_lt_native_bs 00:06:08.899 ************************************ 00:06:08.899 10:49:55 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1129 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:08.899 10:49:55 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@652 -- # local es=0 00:06:08.899 10:49:55 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:08.899 10:49:55 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:08.899 10:49:55 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:08.899 10:49:55 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # type -t 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:08.899 10:49:55 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:08.899 10:49:55 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:08.899 10:49:55 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:08.899 10:49:55 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:08.899 10:49:55 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:08.899 10:49:55 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:08.899 { 00:06:08.899 "subsystems": [ 00:06:08.899 { 00:06:08.899 "subsystem": "bdev", 00:06:08.899 "config": [ 00:06:08.899 { 00:06:08.899 "params": { 00:06:08.899 "trtype": "pcie", 00:06:08.899 "traddr": "0000:00:10.0", 00:06:08.899 "name": "Nvme0" 00:06:08.899 }, 00:06:08.899 "method": "bdev_nvme_attach_controller" 00:06:08.899 }, 00:06:08.899 { 00:06:08.899 "method": "bdev_wait_for_examine" 00:06:08.899 } 00:06:08.899 ] 00:06:08.899 } 00:06:08.899 ] 00:06:08.899 } 00:06:08.899 [2024-11-15 10:49:55.731676] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:06:08.899 [2024-11-15 10:49:55.731807] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59499 ] 00:06:09.159 [2024-11-15 10:49:55.880382] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.159 [2024-11-15 10:49:55.934131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.159 [2024-11-15 10:49:56.003805] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:09.418 [2024-11-15 10:49:56.118370] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:06:09.418 [2024-11-15 10:49:56.118446] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:09.678 [2024-11-15 10:49:56.287214] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:09.678 10:49:56 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@655 -- # es=234 00:06:09.678 10:49:56 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:09.678 10:49:56 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@664 -- # es=106 00:06:09.678 10:49:56 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@665 -- # case "$es" in 00:06:09.678 10:49:56 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@672 -- # es=1 00:06:09.678 10:49:56 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:09.678 00:06:09.678 real 0m0.702s 00:06:09.678 user 0m0.466s 00:06:09.678 sys 0m0.190s 00:06:09.678 10:49:56 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:09.678 10:49:56 
spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:06:09.678 ************************************ 00:06:09.678 END TEST dd_bs_lt_native_bs 00:06:09.678 ************************************ 00:06:09.678 10:49:56 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:06:09.678 10:49:56 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:09.678 10:49:56 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:09.678 10:49:56 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:09.678 ************************************ 00:06:09.678 START TEST dd_rw 00:06:09.678 ************************************ 00:06:09.678 10:49:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1129 -- # basic_rw 4096 00:06:09.678 10:49:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:06:09.678 10:49:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:06:09.678 10:49:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:06:09.678 10:49:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:06:09.678 10:49:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:09.678 10:49:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:09.678 10:49:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:09.678 10:49:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:09.678 10:49:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:09.678 10:49:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:09.678 10:49:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:09.678 10:49:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:09.678 10:49:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:06:09.678 10:49:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:06:09.678 10:49:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:06:09.678 10:49:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:06:09.678 10:49:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:09.678 10:49:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:10.246 10:49:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:06:10.246 10:49:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:10.246 10:49:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:10.246 10:49:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:10.246 [2024-11-15 10:49:57.052509] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
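For context, the trace above shows dd/common.sh pulling the current LBA format (#04) out of the identify dump and matching its data size (4096), which becomes the device's native block size; dd_bs_lt_native_bs then checks that spdk_dd rejects --bs=2048 against that 4096-byte native size (the *ERROR* line and the es=234 -> es=1 mapping are the expected-failure path). The dd_rw test starting here builds its block sizes by left-shifting the native size and sweeps queue depths 1 and 64. Below is a minimal sketch of that sweep, with the bdev config copied from the gen_conf output in this log; the file names are shortened and the loop body is illustrative, not the actual basic_rw.sh.

    #!/usr/bin/env bash
    # Illustrative sketch of the dd_rw sweep seen in this log (not the real test script).
    SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd   # binary path as printed above
    conf='{"subsystems":[{"subsystem":"bdev","config":[
            {"params":{"trtype":"pcie","traddr":"0000:00:10.0","name":"Nvme0"},
             "method":"bdev_nvme_attach_controller"},
            {"method":"bdev_wait_for_examine"}]}]}'          # same config gen_conf emits
    native_bs=4096                                           # from "LBA Format #04: Data Size: 4096"
    qds=(1 64)                                               # queue depths exercised per block size
    bss=()
    for shift_amt in {0..2}; do
        bss+=($((native_bs << shift_amt)))                   # 4096, 8192, 16384
    done
    for bs in "${bss[@]}"; do
        for qd in "${qds[@]}"; do
            # write a generated dump file to the Nvme0n1 bdev; the harness then reads it back
            "$SPDK_DD" --if=dd.dump0 --ob=Nvme0n1 --bs="$bs" --qd="$qd" --json <(printf '%s' "$conf")
        done
    done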
00:06:10.246 [2024-11-15 10:49:57.052625] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59530 ] 00:06:10.246 { 00:06:10.246 "subsystems": [ 00:06:10.246 { 00:06:10.246 "subsystem": "bdev", 00:06:10.246 "config": [ 00:06:10.246 { 00:06:10.246 "params": { 00:06:10.246 "trtype": "pcie", 00:06:10.246 "traddr": "0000:00:10.0", 00:06:10.246 "name": "Nvme0" 00:06:10.246 }, 00:06:10.246 "method": "bdev_nvme_attach_controller" 00:06:10.246 }, 00:06:10.246 { 00:06:10.246 "method": "bdev_wait_for_examine" 00:06:10.246 } 00:06:10.246 ] 00:06:10.246 } 00:06:10.246 ] 00:06:10.246 } 00:06:10.505 [2024-11-15 10:49:57.203936] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.505 [2024-11-15 10:49:57.272755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.505 [2024-11-15 10:49:57.345107] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:10.764  [2024-11-15T10:49:57.885Z] Copying: 60/60 [kB] (average 29 MBps) 00:06:11.024 00:06:11.024 10:49:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:11.024 10:49:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:06:11.024 10:49:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:11.024 10:49:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:11.024 [2024-11-15 10:49:57.749450] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
00:06:11.024 [2024-11-15 10:49:57.749566] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59549 ] 00:06:11.024 { 00:06:11.024 "subsystems": [ 00:06:11.024 { 00:06:11.024 "subsystem": "bdev", 00:06:11.024 "config": [ 00:06:11.024 { 00:06:11.024 "params": { 00:06:11.024 "trtype": "pcie", 00:06:11.024 "traddr": "0000:00:10.0", 00:06:11.024 "name": "Nvme0" 00:06:11.024 }, 00:06:11.024 "method": "bdev_nvme_attach_controller" 00:06:11.024 }, 00:06:11.024 { 00:06:11.024 "method": "bdev_wait_for_examine" 00:06:11.024 } 00:06:11.024 ] 00:06:11.024 } 00:06:11.024 ] 00:06:11.024 } 00:06:11.283 [2024-11-15 10:49:57.895098] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.283 [2024-11-15 10:49:57.971027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.283 [2024-11-15 10:49:58.040904] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:11.542  [2024-11-15T10:49:58.663Z] Copying: 60/60 [kB] (average 19 MBps) 00:06:11.802 00:06:11.802 10:49:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:11.802 10:49:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:06:11.802 10:49:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:11.802 10:49:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:11.802 10:49:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:06:11.802 10:49:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:11.802 10:49:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:11.802 10:49:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:11.802 10:49:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:11.802 10:49:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:11.802 10:49:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:11.802 { 00:06:11.802 "subsystems": [ 00:06:11.802 { 00:06:11.802 "subsystem": "bdev", 00:06:11.802 "config": [ 00:06:11.802 { 00:06:11.802 "params": { 00:06:11.802 "trtype": "pcie", 00:06:11.802 "traddr": "0000:00:10.0", 00:06:11.802 "name": "Nvme0" 00:06:11.802 }, 00:06:11.802 "method": "bdev_nvme_attach_controller" 00:06:11.802 }, 00:06:11.802 { 00:06:11.802 "method": "bdev_wait_for_examine" 00:06:11.802 } 00:06:11.802 ] 00:06:11.802 } 00:06:11.802 ] 00:06:11.802 } 00:06:11.802 [2024-11-15 10:49:58.476188] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
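The entries just above show the shape of each dd_rw iteration: write dd.dump0 to the Nvme0n1 bdev, read it back into dd.dump1 with the same bs/qd, diff the two files, then clear_nvme overwrites the target with zeros (1 MiB from /dev/zero) so the next bs/qd combination starts from a clean device. A condensed sketch of that verify-and-reset cycle, reusing the $SPDK_DD and $conf assumptions from the earlier sketch:

    # One dd_rw iteration, condensed from the commands traced above (illustrative).
    "$SPDK_DD" --if=dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json <(printf '%s' "$conf")              # write
    "$SPDK_DD" --ib=Nvme0n1 --of=dd.dump1 --bs=4096 --qd=1 --count=15 --json <(printf '%s' "$conf")   # read back
    diff -q dd.dump0 dd.dump1                                                                         # data must match
    "$SPDK_DD" --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json <(printf '%s' "$conf")       # clear_nvme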
00:06:11.802 [2024-11-15 10:49:58.476814] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59564 ] 00:06:11.802 [2024-11-15 10:49:58.622491] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.061 [2024-11-15 10:49:58.670557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.061 [2024-11-15 10:49:58.746495] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:12.061  [2024-11-15T10:49:59.181Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:06:12.320 00:06:12.320 10:49:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:12.320 10:49:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:06:12.320 10:49:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:06:12.320 10:49:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:06:12.320 10:49:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:06:12.320 10:49:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:12.320 10:49:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:12.889 10:49:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:06:12.889 10:49:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:12.889 10:49:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:12.889 10:49:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:12.889 [2024-11-15 10:49:59.699578] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
00:06:12.889 [2024-11-15 10:49:59.699998] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59583 ] 00:06:12.890 { 00:06:12.890 "subsystems": [ 00:06:12.890 { 00:06:12.890 "subsystem": "bdev", 00:06:12.890 "config": [ 00:06:12.890 { 00:06:12.890 "params": { 00:06:12.890 "trtype": "pcie", 00:06:12.890 "traddr": "0000:00:10.0", 00:06:12.890 "name": "Nvme0" 00:06:12.890 }, 00:06:12.890 "method": "bdev_nvme_attach_controller" 00:06:12.890 }, 00:06:12.890 { 00:06:12.890 "method": "bdev_wait_for_examine" 00:06:12.890 } 00:06:12.890 ] 00:06:12.890 } 00:06:12.890 ] 00:06:12.890 } 00:06:13.148 [2024-11-15 10:49:59.844563] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.148 [2024-11-15 10:49:59.920610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.148 [2024-11-15 10:49:59.993143] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:13.407  [2024-11-15T10:50:00.526Z] Copying: 60/60 [kB] (average 58 MBps) 00:06:13.665 00:06:13.665 10:50:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:06:13.665 10:50:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:13.665 10:50:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:13.665 10:50:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:13.665 { 00:06:13.665 "subsystems": [ 00:06:13.665 { 00:06:13.665 "subsystem": "bdev", 00:06:13.665 "config": [ 00:06:13.665 { 00:06:13.665 "params": { 00:06:13.665 "trtype": "pcie", 00:06:13.665 "traddr": "0000:00:10.0", 00:06:13.665 "name": "Nvme0" 00:06:13.665 }, 00:06:13.665 "method": "bdev_nvme_attach_controller" 00:06:13.665 }, 00:06:13.665 { 00:06:13.665 "method": "bdev_wait_for_examine" 00:06:13.665 } 00:06:13.665 ] 00:06:13.665 } 00:06:13.665 ] 00:06:13.665 } 00:06:13.665 [2024-11-15 10:50:00.410788] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
00:06:13.665 [2024-11-15 10:50:00.410886] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59601 ] 00:06:13.924 [2024-11-15 10:50:00.555825] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.924 [2024-11-15 10:50:00.619945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.924 [2024-11-15 10:50:00.690752] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:14.182  [2024-11-15T10:50:01.301Z] Copying: 60/60 [kB] (average 58 MBps) 00:06:14.440 00:06:14.440 10:50:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:14.440 10:50:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:06:14.440 10:50:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:14.440 10:50:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:14.440 10:50:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:06:14.440 10:50:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:14.440 10:50:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:14.440 10:50:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:14.440 10:50:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:14.440 10:50:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:14.440 10:50:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:14.440 { 00:06:14.440 "subsystems": [ 00:06:14.440 { 00:06:14.440 "subsystem": "bdev", 00:06:14.440 "config": [ 00:06:14.440 { 00:06:14.440 "params": { 00:06:14.440 "trtype": "pcie", 00:06:14.440 "traddr": "0000:00:10.0", 00:06:14.440 "name": "Nvme0" 00:06:14.440 }, 00:06:14.440 "method": "bdev_nvme_attach_controller" 00:06:14.440 }, 00:06:14.440 { 00:06:14.440 "method": "bdev_wait_for_examine" 00:06:14.440 } 00:06:14.440 ] 00:06:14.440 } 00:06:14.440 ] 00:06:14.440 } 00:06:14.440 [2024-11-15 10:50:01.137965] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
00:06:14.440 [2024-11-15 10:50:01.138600] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59618 ] 00:06:14.440 [2024-11-15 10:50:01.283030] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.697 [2024-11-15 10:50:01.339682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.697 [2024-11-15 10:50:01.411119] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:14.697  [2024-11-15T10:50:01.817Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:14.957 00:06:14.957 10:50:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:14.957 10:50:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:14.957 10:50:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:06:14.957 10:50:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:06:14.957 10:50:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:06:14.957 10:50:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:06:14.957 10:50:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:14.957 10:50:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:15.895 10:50:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:06:15.895 10:50:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:15.895 10:50:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:15.895 10:50:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:15.895 [2024-11-15 10:50:02.447416] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
00:06:15.895 [2024-11-15 10:50:02.447726] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59637 ] 00:06:15.895 { 00:06:15.895 "subsystems": [ 00:06:15.895 { 00:06:15.895 "subsystem": "bdev", 00:06:15.895 "config": [ 00:06:15.895 { 00:06:15.895 "params": { 00:06:15.895 "trtype": "pcie", 00:06:15.895 "traddr": "0000:00:10.0", 00:06:15.895 "name": "Nvme0" 00:06:15.895 }, 00:06:15.895 "method": "bdev_nvme_attach_controller" 00:06:15.895 }, 00:06:15.895 { 00:06:15.895 "method": "bdev_wait_for_examine" 00:06:15.895 } 00:06:15.895 ] 00:06:15.895 } 00:06:15.895 ] 00:06:15.895 } 00:06:15.895 [2024-11-15 10:50:02.591376] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.895 [2024-11-15 10:50:02.665273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.895 [2024-11-15 10:50:02.736886] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:16.154  [2024-11-15T10:50:03.274Z] Copying: 56/56 [kB] (average 54 MBps) 00:06:16.413 00:06:16.413 10:50:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:06:16.413 10:50:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:16.413 10:50:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:16.413 10:50:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:16.413 [2024-11-15 10:50:03.181782] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
00:06:16.413 [2024-11-15 10:50:03.181892] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59656 ] 00:06:16.413 { 00:06:16.413 "subsystems": [ 00:06:16.413 { 00:06:16.413 "subsystem": "bdev", 00:06:16.413 "config": [ 00:06:16.413 { 00:06:16.413 "params": { 00:06:16.413 "trtype": "pcie", 00:06:16.413 "traddr": "0000:00:10.0", 00:06:16.413 "name": "Nvme0" 00:06:16.413 }, 00:06:16.413 "method": "bdev_nvme_attach_controller" 00:06:16.413 }, 00:06:16.413 { 00:06:16.413 "method": "bdev_wait_for_examine" 00:06:16.413 } 00:06:16.413 ] 00:06:16.413 } 00:06:16.413 ] 00:06:16.413 } 00:06:16.672 [2024-11-15 10:50:03.326944] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.672 [2024-11-15 10:50:03.373126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.672 [2024-11-15 10:50:03.446981] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:16.932  [2024-11-15T10:50:04.052Z] Copying: 56/56 [kB] (average 27 MBps) 00:06:17.191 00:06:17.191 10:50:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:17.191 10:50:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:06:17.192 10:50:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:17.192 10:50:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:17.192 10:50:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:06:17.192 10:50:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:17.192 10:50:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:17.192 10:50:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:17.192 10:50:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:17.192 10:50:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:17.192 10:50:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:17.192 [2024-11-15 10:50:03.883808] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
00:06:17.192 [2024-11-15 10:50:03.884216] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59677 ] 00:06:17.192 { 00:06:17.192 "subsystems": [ 00:06:17.192 { 00:06:17.192 "subsystem": "bdev", 00:06:17.192 "config": [ 00:06:17.192 { 00:06:17.192 "params": { 00:06:17.192 "trtype": "pcie", 00:06:17.192 "traddr": "0000:00:10.0", 00:06:17.192 "name": "Nvme0" 00:06:17.192 }, 00:06:17.192 "method": "bdev_nvme_attach_controller" 00:06:17.192 }, 00:06:17.192 { 00:06:17.192 "method": "bdev_wait_for_examine" 00:06:17.192 } 00:06:17.192 ] 00:06:17.192 } 00:06:17.192 ] 00:06:17.192 } 00:06:17.192 [2024-11-15 10:50:04.020194] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.450 [2024-11-15 10:50:04.095753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.450 [2024-11-15 10:50:04.167999] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:17.450  [2024-11-15T10:50:04.570Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:17.709 00:06:17.709 10:50:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:17.709 10:50:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:06:17.709 10:50:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:06:17.709 10:50:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:06:17.709 10:50:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:06:17.709 10:50:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:17.709 10:50:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:18.277 10:50:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:06:18.277 10:50:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:18.277 10:50:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:18.277 10:50:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:18.277 [2024-11-15 10:50:05.071558] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
00:06:18.277 [2024-11-15 10:50:05.071819] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59696 ] 00:06:18.277 { 00:06:18.277 "subsystems": [ 00:06:18.277 { 00:06:18.277 "subsystem": "bdev", 00:06:18.277 "config": [ 00:06:18.277 { 00:06:18.277 "params": { 00:06:18.277 "trtype": "pcie", 00:06:18.277 "traddr": "0000:00:10.0", 00:06:18.277 "name": "Nvme0" 00:06:18.277 }, 00:06:18.277 "method": "bdev_nvme_attach_controller" 00:06:18.277 }, 00:06:18.277 { 00:06:18.277 "method": "bdev_wait_for_examine" 00:06:18.277 } 00:06:18.277 ] 00:06:18.277 } 00:06:18.277 ] 00:06:18.277 } 00:06:18.537 [2024-11-15 10:50:05.216081] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.537 [2024-11-15 10:50:05.272587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.537 [2024-11-15 10:50:05.344928] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:18.796  [2024-11-15T10:50:05.916Z] Copying: 56/56 [kB] (average 54 MBps) 00:06:19.055 00:06:19.055 10:50:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:06:19.055 10:50:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:19.055 10:50:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:19.055 10:50:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:19.055 { 00:06:19.055 "subsystems": [ 00:06:19.055 { 00:06:19.055 "subsystem": "bdev", 00:06:19.055 "config": [ 00:06:19.055 { 00:06:19.055 "params": { 00:06:19.055 "trtype": "pcie", 00:06:19.055 "traddr": "0000:00:10.0", 00:06:19.055 "name": "Nvme0" 00:06:19.055 }, 00:06:19.055 "method": "bdev_nvme_attach_controller" 00:06:19.055 }, 00:06:19.055 { 00:06:19.055 "method": "bdev_wait_for_examine" 00:06:19.055 } 00:06:19.055 ] 00:06:19.055 } 00:06:19.055 ] 00:06:19.055 } 00:06:19.055 [2024-11-15 10:50:05.785691] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
00:06:19.055 [2024-11-15 10:50:05.785804] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59709 ] 00:06:19.313 [2024-11-15 10:50:05.927889] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.313 [2024-11-15 10:50:05.987579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.313 [2024-11-15 10:50:06.057800] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:19.572  [2024-11-15T10:50:06.433Z] Copying: 56/56 [kB] (average 54 MBps) 00:06:19.572 00:06:19.572 10:50:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:19.572 10:50:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:06:19.572 10:50:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:19.831 10:50:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:19.831 10:50:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:06:19.831 10:50:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:19.831 10:50:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:19.831 10:50:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:19.831 10:50:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:19.831 10:50:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:19.831 10:50:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:19.831 { 00:06:19.831 "subsystems": [ 00:06:19.831 { 00:06:19.831 "subsystem": "bdev", 00:06:19.831 "config": [ 00:06:19.831 { 00:06:19.831 "params": { 00:06:19.831 "trtype": "pcie", 00:06:19.831 "traddr": "0000:00:10.0", 00:06:19.831 "name": "Nvme0" 00:06:19.831 }, 00:06:19.831 "method": "bdev_nvme_attach_controller" 00:06:19.831 }, 00:06:19.831 { 00:06:19.831 "method": "bdev_wait_for_examine" 00:06:19.831 } 00:06:19.831 ] 00:06:19.831 } 00:06:19.831 ] 00:06:19.831 } 00:06:19.831 [2024-11-15 10:50:06.487875] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
00:06:19.831 [2024-11-15 10:50:06.488012] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59725 ] 00:06:19.831 [2024-11-15 10:50:06.635735] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.831 [2024-11-15 10:50:06.686430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.090 [2024-11-15 10:50:06.757461] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:20.091  [2024-11-15T10:50:07.211Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:20.350 00:06:20.350 10:50:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:20.350 10:50:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:20.350 10:50:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:06:20.350 10:50:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:06:20.350 10:50:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:06:20.350 10:50:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:06:20.350 10:50:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:20.350 10:50:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:20.917 10:50:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:06:20.917 10:50:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:20.917 10:50:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:20.917 10:50:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:20.917 [2024-11-15 10:50:07.670721] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
00:06:20.917 [2024-11-15 10:50:07.670813] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59744 ] 00:06:20.917 { 00:06:20.917 "subsystems": [ 00:06:20.917 { 00:06:20.917 "subsystem": "bdev", 00:06:20.917 "config": [ 00:06:20.917 { 00:06:20.917 "params": { 00:06:20.917 "trtype": "pcie", 00:06:20.917 "traddr": "0000:00:10.0", 00:06:20.917 "name": "Nvme0" 00:06:20.917 }, 00:06:20.917 "method": "bdev_nvme_attach_controller" 00:06:20.917 }, 00:06:20.917 { 00:06:20.917 "method": "bdev_wait_for_examine" 00:06:20.917 } 00:06:20.917 ] 00:06:20.917 } 00:06:20.917 ] 00:06:20.917 } 00:06:21.176 [2024-11-15 10:50:07.814885] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.176 [2024-11-15 10:50:07.868871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.176 [2024-11-15 10:50:07.940126] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:21.434  [2024-11-15T10:50:08.554Z] Copying: 48/48 [kB] (average 46 MBps) 00:06:21.693 00:06:21.693 10:50:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:06:21.693 10:50:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:21.693 10:50:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:21.693 10:50:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:21.693 { 00:06:21.693 "subsystems": [ 00:06:21.693 { 00:06:21.693 "subsystem": "bdev", 00:06:21.693 "config": [ 00:06:21.693 { 00:06:21.693 "params": { 00:06:21.693 "trtype": "pcie", 00:06:21.693 "traddr": "0000:00:10.0", 00:06:21.693 "name": "Nvme0" 00:06:21.693 }, 00:06:21.693 "method": "bdev_nvme_attach_controller" 00:06:21.693 }, 00:06:21.693 { 00:06:21.693 "method": "bdev_wait_for_examine" 00:06:21.693 } 00:06:21.693 ] 00:06:21.693 } 00:06:21.693 ] 00:06:21.693 } 00:06:21.693 [2024-11-15 10:50:08.361095] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
00:06:21.693 [2024-11-15 10:50:08.361338] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59763 ] 00:06:21.693 [2024-11-15 10:50:08.503497] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.693 [2024-11-15 10:50:08.548015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.951 [2024-11-15 10:50:08.617879] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:21.951  [2024-11-15T10:50:09.071Z] Copying: 48/48 [kB] (average 46 MBps) 00:06:22.210 00:06:22.210 10:50:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:22.210 10:50:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:06:22.210 10:50:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:22.210 10:50:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:22.210 10:50:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:06:22.210 10:50:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:22.210 10:50:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:22.210 10:50:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:22.210 10:50:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:22.210 10:50:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:22.210 10:50:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:22.210 { 00:06:22.210 "subsystems": [ 00:06:22.210 { 00:06:22.210 "subsystem": "bdev", 00:06:22.210 "config": [ 00:06:22.210 { 00:06:22.210 "params": { 00:06:22.210 "trtype": "pcie", 00:06:22.210 "traddr": "0000:00:10.0", 00:06:22.210 "name": "Nvme0" 00:06:22.210 }, 00:06:22.210 "method": "bdev_nvme_attach_controller" 00:06:22.210 }, 00:06:22.210 { 00:06:22.210 "method": "bdev_wait_for_examine" 00:06:22.210 } 00:06:22.210 ] 00:06:22.210 } 00:06:22.210 ] 00:06:22.210 } 00:06:22.210 [2024-11-15 10:50:09.064811] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
00:06:22.210 [2024-11-15 10:50:09.065052] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59784 ] 00:06:22.469 [2024-11-15 10:50:09.207306] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.469 [2024-11-15 10:50:09.256547] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.469 [2024-11-15 10:50:09.326422] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:22.727  [2024-11-15T10:50:09.847Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:06:22.986 00:06:22.986 10:50:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:22.986 10:50:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:06:22.986 10:50:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:06:22.986 10:50:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:06:22.986 10:50:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:06:22.986 10:50:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:22.986 10:50:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:23.244 10:50:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:06:23.244 10:50:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:23.244 10:50:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:23.244 10:50:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:23.503 { 00:06:23.503 "subsystems": [ 00:06:23.503 { 00:06:23.503 "subsystem": "bdev", 00:06:23.503 "config": [ 00:06:23.503 { 00:06:23.503 "params": { 00:06:23.503 "trtype": "pcie", 00:06:23.503 "traddr": "0000:00:10.0", 00:06:23.503 "name": "Nvme0" 00:06:23.503 }, 00:06:23.504 "method": "bdev_nvme_attach_controller" 00:06:23.504 }, 00:06:23.504 { 00:06:23.504 "method": "bdev_wait_for_examine" 00:06:23.504 } 00:06:23.504 ] 00:06:23.504 } 00:06:23.504 ] 00:06:23.504 } 00:06:23.504 [2024-11-15 10:50:10.120261] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
00:06:23.504 [2024-11-15 10:50:10.120609] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59803 ] 00:06:23.504 [2024-11-15 10:50:10.267169] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.504 [2024-11-15 10:50:10.330301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.762 [2024-11-15 10:50:10.400452] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:23.762  [2024-11-15T10:50:10.882Z] Copying: 48/48 [kB] (average 46 MBps) 00:06:24.021 00:06:24.021 10:50:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:06:24.021 10:50:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:24.021 10:50:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:24.021 10:50:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:24.021 [2024-11-15 10:50:10.830695] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:06:24.021 [2024-11-15 10:50:10.830801] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59811 ] 00:06:24.021 { 00:06:24.021 "subsystems": [ 00:06:24.021 { 00:06:24.021 "subsystem": "bdev", 00:06:24.021 "config": [ 00:06:24.021 { 00:06:24.021 "params": { 00:06:24.021 "trtype": "pcie", 00:06:24.021 "traddr": "0000:00:10.0", 00:06:24.021 "name": "Nvme0" 00:06:24.021 }, 00:06:24.021 "method": "bdev_nvme_attach_controller" 00:06:24.021 }, 00:06:24.021 { 00:06:24.021 "method": "bdev_wait_for_examine" 00:06:24.021 } 00:06:24.021 ] 00:06:24.021 } 00:06:24.021 ] 00:06:24.021 } 00:06:24.280 [2024-11-15 10:50:10.975610] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.280 [2024-11-15 10:50:11.024642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.280 [2024-11-15 10:50:11.092142] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:24.538  [2024-11-15T10:50:11.658Z] Copying: 48/48 [kB] (average 46 MBps) 00:06:24.797 00:06:24.797 10:50:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:24.797 10:50:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:06:24.797 10:50:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:24.797 10:50:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:24.797 10:50:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:06:24.797 10:50:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:24.797 10:50:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:24.797 10:50:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 
00:06:24.797 10:50:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:24.797 10:50:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:24.797 10:50:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:24.797 [2024-11-15 10:50:11.513056] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:06:24.797 [2024-11-15 10:50:11.513146] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59832 ] 00:06:24.797 { 00:06:24.797 "subsystems": [ 00:06:24.797 { 00:06:24.797 "subsystem": "bdev", 00:06:24.797 "config": [ 00:06:24.797 { 00:06:24.797 "params": { 00:06:24.797 "trtype": "pcie", 00:06:24.797 "traddr": "0000:00:10.0", 00:06:24.797 "name": "Nvme0" 00:06:24.797 }, 00:06:24.797 "method": "bdev_nvme_attach_controller" 00:06:24.797 }, 00:06:24.797 { 00:06:24.797 "method": "bdev_wait_for_examine" 00:06:24.797 } 00:06:24.797 ] 00:06:24.797 } 00:06:24.797 ] 00:06:24.797 } 00:06:25.056 [2024-11-15 10:50:11.658307] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.056 [2024-11-15 10:50:11.701796] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.056 [2024-11-15 10:50:11.771153] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:25.056  [2024-11-15T10:50:12.175Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:25.314 00:06:25.314 ************************************ 00:06:25.314 END TEST dd_rw 00:06:25.314 ************************************ 00:06:25.314 00:06:25.314 real 0m15.702s 00:06:25.314 user 0m11.430s 00:06:25.314 sys 0m6.477s 00:06:25.314 10:50:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:25.314 10:50:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:25.573 10:50:12 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:06:25.573 10:50:12 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:25.573 10:50:12 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:25.573 10:50:12 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:25.573 ************************************ 00:06:25.573 START TEST dd_rw_offset 00:06:25.573 ************************************ 00:06:25.573 10:50:12 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1129 -- # basic_offset 00:06:25.573 10:50:12 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:06:25.573 10:50:12 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:06:25.573 10:50:12 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:06:25.573 10:50:12 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:25.573 10:50:12 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:06:25.574 10:50:12 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=v6hf2juwregywewa1ewth4g6h5fvc6a07goked71api085pukvito42nwm2fsuqdhhol0ivnpfwcwxo6vw0uw6xnrmk98p61tzn05kw3excne7k4b5j02o263hfju5d4m70p3o1570tkalcaz6awykqvcm1yrhae6sry5x0gr2cmh905tcdctn06rhoyawa3ltgedazx00b91v2rujju3tbiv3v0evwuapejqxt2yai979ckrouu7az8r2gb7qsfxe90e9qk3pe6krexplc9dc4cp7x0ztunz7hkzxgq4jpw5te5qdtpz463i0zrlkiwkg9l6aiiqhu4wnl4z1afrs2fskpdubnudyt48mkx7cbtk68jidn71hodkvi9kzx0rlu5m1skrfzimk3uwucqo9qgvqoo3az58iffw0w71vvf45ogrvxj1b0yogg71j1f82i6p01l48vub9hmpdxoy3u5dnlahnio5dfdqdz6dzxg3o5ngx4bbpp6csz0jst5txesppryln3w5efy5tbxde8x64v367u3iu5lfu5j15p87j9ycwy1l5zzxcqz249xrzr1na3x861urwy9t78wpzmsfm9fr7r8xo751xh5el4fxtb2hx7x61srhveo28efo51tzxu546wi61ss7d252p2tejhdhwfhrphyedxydsnv1wevbnhom7uq2jhgtohvxfjar7uwn7jp7o33ujuhdjm0plxn6qa3snazghc62rzemasbk8f40pk1nq9xzx8ufmftxawbgy757r7309n39xaviv26vo04ecnxsupxghaufrrsc580ep4jdnsofxr31vjsde6phm0desmb0n0chgw6pmkm1vi7e2g7bvkv8a2e5pksltm2iatlkdatzfmcgsqvply2fqzzrtwlxxkq9wa7qzjnri4818n03mvm6bqth07qt5c8671y06ao0uc5axzzrdoh4el9ottyxiyfomamjl8fj9o2w6cddsowdw9rlejbascv42mvof54a2m8cyhkffjl4b8evto6vmbk0m182ewiqcfhqc14ow90gisejv7fythbep5dk4z1vb4geaqiukux6mqro7mmdhyw4dekeajkldy8ox91slt0b1k4yi0efilobv6q1eajmgyj76m74l0prj1yyd410zwl95d5c9nssn964dwh82wyftmcfjvhv7kz7hnpftex38j62ae32ee91gk0wuktc04amr1eiii378jb07m9dm32cy2hksi77skyvona0zg2jhj9j1fcc33jvuaypxdidi0e7vvc47fb99fa1cxl361mddityaf2hgrwcjtukqskx14jbub7rfdcqi7977bojkmy6u6plefw4sftchgnt6rqnx2ymeor8hqzinmpv909kpn2xfbyuvgckv577lu8mbbtdf63xhe0n26pm35illk8smgebtr48rvgfipuxk1s3bobn6u2z3kic4hzdt6937hojl7uvk5alwzdpyaj741nm1b0diucai71i2qxj9a9lzie5rjyq9x92i8nneyyp8rf6pa7di78ryvf8l42zlq5dgmgiiaq68q4ixmocj8ic0yy9evl97jgtac22rv0mli3wbfv38oe7ekljjsr2vljffhh0i1a8h169gpdmj5uv7mlnjfi4yjpx0s1wxr4dpfqne53ehnnyty0ufjpnjjld700nmegg21kvx4ovuzq8wdhk09wt7lpkd8chusn3jzzlwklpz990np8vc95tngium6ix4mf2iaga4qe6cm9xejqwosk10te2p1dwn643q5oon41qm54uyo1gig3tvxv620uj2ghx293g2cg2stavh87x1ojn2iv2t8le6ezxndn1poau4thwj852whsn9ro3yg3pe3i39on3qwlxq44gw682ash1u2mdkp95ax3aahjlfdq2lop69ciebywqxhn1ny5s8volkqieksrpag41i1wjld85cg6fhmkhd4oc78ivprr2mahgisfw3bl5yzarkoz4lt4w22iuph4212grj76ov4ucf5s5lf294d8mfhih6491azuo3k72wd7pmjftcsvy0jy8do05pq6jf7ba2f98vkgfkbvl7ricm2b5qpzu84wjoyg6m023x2msh590hts3v0fhwngqjzaad21ribi7qr1jbu0szumunywfggwir2353ncnuufje6r481lvf2fjb8oewsanwoi6a6qhbhmm3in9nkpzmp8f4uon2w8orgiqxydh77aucj21xk73yprae0pw6b4v628t2pwvh1gzaqdes3wluz7p0xeufw2jzacw227s43et3dq0ve17lkyidwb5zrhxlj8y6u0pmjkn7qon3o2luwh51g6yass5d3nbaemfuuhqoqd8d2t5w2l9n438ufkjpumti28oxdmn02uhw6lbcci2rooh0xbq60gdhyly35tdy4zs1tlmik63uvhct77zc5u64d6mytaw7lnh0fhk4gc7kxi984upther9f2y8szv12yffv3jv4u74hvy0p3pj3rsyetu1h3hw1v7jaxrxktdw69ngsfsn0ywd6hy4oj9gfhqi17jy7s2yq6da91291bp3ln8rok2k3ir7f4689zyn2m06hh10aemakgt7koewirx6ojz0da6khi84y9c1v0s24qo5gbt5n97licabp9z0rjbfxbvg67kc2vb0ial1kbsv5q7anbds9flwk3uw66662mvhhfjux91vdjwwumlf3jl6mztn6r5fjit35x10yolk5hn4ecyh0tp6a4lw8t54wngk1g9z00dzse20qqxec6warmlg89hcxccysghscp6eptjmjcwtdcmm4vcr3b77kzzzwbzqhxfe8ek632hmjwdn55ooev9eyysolsfph357n121o2d36icxgjbhkryx7qibpvs4w9vcfpj4yl75swpsf04sk1hp9d21gupsa9yyhz7htjlweikz9d1d9cj5o2hj0uk3jhw6b969l7jacivr1x5fo1nbagqej4hadd9jvs9tymgaibll7amlp65fl57gg2qcklkqzopp04brif0fttppsi3r9o07b7tg4ve2c35dz5qzyw6jd30whpze7e9t2htqo81awb786s9fp1wsd4rqtxc9mnw2nnt3y1ws9owtjsi6r5wyvhhlu1xgxzred17md92ptnptclairpgt12ezol0s8hhd2zfjyi72oerbye0ehs6tcfp99t3rx3y3ff0uoxoqsi0e7w9iapao8dobdfdzwnk82u0qhx3kem2s8itf60q0xc44nqxy0w5hzl0azfpvi4fjcmj4ftgotr29vmmupctpboivhc8ekga4nraosfahq7srqlc154o4sdwi6e2k4r6ku85cod1nm10e7njg7li7ka6irb7hrogahq1bmtgpjxyrw50yizjmf6vufijblodmnp929rvzl0ss3wwtfc1q03slrjdig95xix5fehimh2jdw6ntevynrqwds66g8p51w3pesrrv3yd0euq4md1l
wron313ny8avc3629n2mcm06iskgfkbl77bjkt7vvtxk9x6shqwiwgmmjbkht1jwfemodtk36a61yt2ey8ozxubgrd8oerty17th31ek5ltecvqcfnpw56gtc0rem72awjdn9krc8v61ubhje8rkiqtaf6a60doue8od8ci3i40ymxw4sarmqdmtplcas7xvvz5mkabx9dsglwygp1y6xgzggo193nu5her2w53daghsvvx8iwh8d8lwp7x1pbhsi6or92ttrkb7al112sovb2mwdpbfnzx7uqncwc8uahn9x6qeep3yad3nbagbdbuyj7j6xw30nxc8xc9rk3cduj9b6on32i60d74k10eof3vo2zyj6fbbv6106gv46rbg5uu0jxl9bgy7l1o8oanvd4v5pw5d8bd7b7jg27zscwhpx4sp668j9k2nsdnuccnk24f24ukrhl30ernl16g6mq2z72746tknkt2xqfrdbbuk92hppnyg5f137o0lkvj9fvmgils8y544wqiylbyg40a2tybd5u6dog 00:06:25.574 10:50:12 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:06:25.574 10:50:12 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:06:25.574 10:50:12 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:06:25.574 10:50:12 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:25.574 [2024-11-15 10:50:12.293132] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:06:25.574 [2024-11-15 10:50:12.293394] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59868 ] 00:06:25.574 { 00:06:25.574 "subsystems": [ 00:06:25.574 { 00:06:25.574 "subsystem": "bdev", 00:06:25.574 "config": [ 00:06:25.574 { 00:06:25.574 "params": { 00:06:25.574 "trtype": "pcie", 00:06:25.574 "traddr": "0000:00:10.0", 00:06:25.574 "name": "Nvme0" 00:06:25.574 }, 00:06:25.574 "method": "bdev_nvme_attach_controller" 00:06:25.574 }, 00:06:25.574 { 00:06:25.574 "method": "bdev_wait_for_examine" 00:06:25.574 } 00:06:25.574 ] 00:06:25.574 } 00:06:25.574 ] 00:06:25.574 } 00:06:25.833 [2024-11-15 10:50:12.438289] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.833 [2024-11-15 10:50:12.484602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.833 [2024-11-15 10:50:12.552263] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:25.833  [2024-11-15T10:50:12.954Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:06:26.093 00:06:26.093 10:50:12 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:06:26.093 10:50:12 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:06:26.093 10:50:12 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:06:26.093 10:50:12 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:26.353 { 00:06:26.353 "subsystems": [ 00:06:26.353 { 00:06:26.353 "subsystem": "bdev", 00:06:26.353 "config": [ 00:06:26.353 { 00:06:26.353 "params": { 00:06:26.353 "trtype": "pcie", 00:06:26.353 "traddr": "0000:00:10.0", 00:06:26.353 "name": "Nvme0" 00:06:26.353 }, 00:06:26.353 "method": "bdev_nvme_attach_controller" 00:06:26.353 }, 00:06:26.353 { 00:06:26.353 "method": "bdev_wait_for_examine" 00:06:26.353 } 00:06:26.353 ] 00:06:26.353 } 00:06:26.353 ] 00:06:26.353 } 00:06:26.353 [2024-11-15 10:50:12.976837] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
00:06:26.353 [2024-11-15 10:50:12.976935] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59881 ] 00:06:26.353 [2024-11-15 10:50:13.119199] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.353 [2024-11-15 10:50:13.165418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.613 [2024-11-15 10:50:13.233767] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:26.613  [2024-11-15T10:50:13.733Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:06:26.872 00:06:26.872 10:50:13 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:06:26.872 ************************************ 00:06:26.872 END TEST dd_rw_offset 00:06:26.872 ************************************ 00:06:26.872 10:50:13 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ v6hf2juwregywewa1ewth4g6h5fvc6a07goked71api085pukvito42nwm2fsuqdhhol0ivnpfwcwxo6vw0uw6xnrmk98p61tzn05kw3excne7k4b5j02o263hfju5d4m70p3o1570tkalcaz6awykqvcm1yrhae6sry5x0gr2cmh905tcdctn06rhoyawa3ltgedazx00b91v2rujju3tbiv3v0evwuapejqxt2yai979ckrouu7az8r2gb7qsfxe90e9qk3pe6krexplc9dc4cp7x0ztunz7hkzxgq4jpw5te5qdtpz463i0zrlkiwkg9l6aiiqhu4wnl4z1afrs2fskpdubnudyt48mkx7cbtk68jidn71hodkvi9kzx0rlu5m1skrfzimk3uwucqo9qgvqoo3az58iffw0w71vvf45ogrvxj1b0yogg71j1f82i6p01l48vub9hmpdxoy3u5dnlahnio5dfdqdz6dzxg3o5ngx4bbpp6csz0jst5txesppryln3w5efy5tbxde8x64v367u3iu5lfu5j15p87j9ycwy1l5zzxcqz249xrzr1na3x861urwy9t78wpzmsfm9fr7r8xo751xh5el4fxtb2hx7x61srhveo28efo51tzxu546wi61ss7d252p2tejhdhwfhrphyedxydsnv1wevbnhom7uq2jhgtohvxfjar7uwn7jp7o33ujuhdjm0plxn6qa3snazghc62rzemasbk8f40pk1nq9xzx8ufmftxawbgy757r7309n39xaviv26vo04ecnxsupxghaufrrsc580ep4jdnsofxr31vjsde6phm0desmb0n0chgw6pmkm1vi7e2g7bvkv8a2e5pksltm2iatlkdatzfmcgsqvply2fqzzrtwlxxkq9wa7qzjnri4818n03mvm6bqth07qt5c8671y06ao0uc5axzzrdoh4el9ottyxiyfomamjl8fj9o2w6cddsowdw9rlejbascv42mvof54a2m8cyhkffjl4b8evto6vmbk0m182ewiqcfhqc14ow90gisejv7fythbep5dk4z1vb4geaqiukux6mqro7mmdhyw4dekeajkldy8ox91slt0b1k4yi0efilobv6q1eajmgyj76m74l0prj1yyd410zwl95d5c9nssn964dwh82wyftmcfjvhv7kz7hnpftex38j62ae32ee91gk0wuktc04amr1eiii378jb07m9dm32cy2hksi77skyvona0zg2jhj9j1fcc33jvuaypxdidi0e7vvc47fb99fa1cxl361mddityaf2hgrwcjtukqskx14jbub7rfdcqi7977bojkmy6u6plefw4sftchgnt6rqnx2ymeor8hqzinmpv909kpn2xfbyuvgckv577lu8mbbtdf63xhe0n26pm35illk8smgebtr48rvgfipuxk1s3bobn6u2z3kic4hzdt6937hojl7uvk5alwzdpyaj741nm1b0diucai71i2qxj9a9lzie5rjyq9x92i8nneyyp8rf6pa7di78ryvf8l42zlq5dgmgiiaq68q4ixmocj8ic0yy9evl97jgtac22rv0mli3wbfv38oe7ekljjsr2vljffhh0i1a8h169gpdmj5uv7mlnjfi4yjpx0s1wxr4dpfqne53ehnnyty0ufjpnjjld700nmegg21kvx4ovuzq8wdhk09wt7lpkd8chusn3jzzlwklpz990np8vc95tngium6ix4mf2iaga4qe6cm9xejqwosk10te2p1dwn643q5oon41qm54uyo1gig3tvxv620uj2ghx293g2cg2stavh87x1ojn2iv2t8le6ezxndn1poau4thwj852whsn9ro3yg3pe3i39on3qwlxq44gw682ash1u2mdkp95ax3aahjlfdq2lop69ciebywqxhn1ny5s8volkqieksrpag41i1wjld85cg6fhmkhd4oc78ivprr2mahgisfw3bl5yzarkoz4lt4w22iuph4212grj76ov4ucf5s5lf294d8mfhih6491azuo3k72wd7pmjftcsvy0jy8do05pq6jf7ba2f98vkgfkbvl7ricm2b5qpzu84wjoyg6m023x2msh590hts3v0fhwngqjzaad21ribi7qr1jbu0szumunywfggwir2353ncnuufje6r481lvf2fjb8oewsanwoi6a6qhbhmm3in9nkpzmp8f4uon2w8orgiqxydh77aucj21xk73yprae0pw6b4v628t2pwvh1gzaqdes3wluz7p0xeufw2jzacw227s43et3dq0ve17lkyidwb5zrhxlj8y6u0pmjkn7qon3o2luwh51g6yass5d3nbaemfuuhqoqd8d2t5w2l9n438ufkjpumti28oxdmn02uhw6lbcci2rooh0xbq60gdhyly35tdy4zs1tlm
ik63uvhct77zc5u64d6mytaw7lnh0fhk4gc7kxi984upther9f2y8szv12yffv3jv4u74hvy0p3pj3rsyetu1h3hw1v7jaxrxktdw69ngsfsn0ywd6hy4oj9gfhqi17jy7s2yq6da91291bp3ln8rok2k3ir7f4689zyn2m06hh10aemakgt7koewirx6ojz0da6khi84y9c1v0s24qo5gbt5n97licabp9z0rjbfxbvg67kc2vb0ial1kbsv5q7anbds9flwk3uw66662mvhhfjux91vdjwwumlf3jl6mztn6r5fjit35x10yolk5hn4ecyh0tp6a4lw8t54wngk1g9z00dzse20qqxec6warmlg89hcxccysghscp6eptjmjcwtdcmm4vcr3b77kzzzwbzqhxfe8ek632hmjwdn55ooev9eyysolsfph357n121o2d36icxgjbhkryx7qibpvs4w9vcfpj4yl75swpsf04sk1hp9d21gupsa9yyhz7htjlweikz9d1d9cj5o2hj0uk3jhw6b969l7jacivr1x5fo1nbagqej4hadd9jvs9tymgaibll7amlp65fl57gg2qcklkqzopp04brif0fttppsi3r9o07b7tg4ve2c35dz5qzyw6jd30whpze7e9t2htqo81awb786s9fp1wsd4rqtxc9mnw2nnt3y1ws9owtjsi6r5wyvhhlu1xgxzred17md92ptnptclairpgt12ezol0s8hhd2zfjyi72oerbye0ehs6tcfp99t3rx3y3ff0uoxoqsi0e7w9iapao8dobdfdzwnk82u0qhx3kem2s8itf60q0xc44nqxy0w5hzl0azfpvi4fjcmj4ftgotr29vmmupctpboivhc8ekga4nraosfahq7srqlc154o4sdwi6e2k4r6ku85cod1nm10e7njg7li7ka6irb7hrogahq1bmtgpjxyrw50yizjmf6vufijblodmnp929rvzl0ss3wwtfc1q03slrjdig95xix5fehimh2jdw6ntevynrqwds66g8p51w3pesrrv3yd0euq4md1lwron313ny8avc3629n2mcm06iskgfkbl77bjkt7vvtxk9x6shqwiwgmmjbkht1jwfemodtk36a61yt2ey8ozxubgrd8oerty17th31ek5ltecvqcfnpw56gtc0rem72awjdn9krc8v61ubhje8rkiqtaf6a60doue8od8ci3i40ymxw4sarmqdmtplcas7xvvz5mkabx9dsglwygp1y6xgzggo193nu5her2w53daghsvvx8iwh8d8lwp7x1pbhsi6or92ttrkb7al112sovb2mwdpbfnzx7uqncwc8uahn9x6qeep3yad3nbagbdbuyj7j6xw30nxc8xc9rk3cduj9b6on32i60d74k10eof3vo2zyj6fbbv6106gv46rbg5uu0jxl9bgy7l1o8oanvd4v5pw5d8bd7b7jg27zscwhpx4sp668j9k2nsdnuccnk24f24ukrhl30ernl16g6mq2z72746tknkt2xqfrdbbuk92hppnyg5f137o0lkvj9fvmgils8y544wqiylbyg40a2tybd5u6dog == \v\6\h\f\2\j\u\w\r\e\g\y\w\e\w\a\1\e\w\t\h\4\g\6\h\5\f\v\c\6\a\0\7\g\o\k\e\d\7\1\a\p\i\0\8\5\p\u\k\v\i\t\o\4\2\n\w\m\2\f\s\u\q\d\h\h\o\l\0\i\v\n\p\f\w\c\w\x\o\6\v\w\0\u\w\6\x\n\r\m\k\9\8\p\6\1\t\z\n\0\5\k\w\3\e\x\c\n\e\7\k\4\b\5\j\0\2\o\2\6\3\h\f\j\u\5\d\4\m\7\0\p\3\o\1\5\7\0\t\k\a\l\c\a\z\6\a\w\y\k\q\v\c\m\1\y\r\h\a\e\6\s\r\y\5\x\0\g\r\2\c\m\h\9\0\5\t\c\d\c\t\n\0\6\r\h\o\y\a\w\a\3\l\t\g\e\d\a\z\x\0\0\b\9\1\v\2\r\u\j\j\u\3\t\b\i\v\3\v\0\e\v\w\u\a\p\e\j\q\x\t\2\y\a\i\9\7\9\c\k\r\o\u\u\7\a\z\8\r\2\g\b\7\q\s\f\x\e\9\0\e\9\q\k\3\p\e\6\k\r\e\x\p\l\c\9\d\c\4\c\p\7\x\0\z\t\u\n\z\7\h\k\z\x\g\q\4\j\p\w\5\t\e\5\q\d\t\p\z\4\6\3\i\0\z\r\l\k\i\w\k\g\9\l\6\a\i\i\q\h\u\4\w\n\l\4\z\1\a\f\r\s\2\f\s\k\p\d\u\b\n\u\d\y\t\4\8\m\k\x\7\c\b\t\k\6\8\j\i\d\n\7\1\h\o\d\k\v\i\9\k\z\x\0\r\l\u\5\m\1\s\k\r\f\z\i\m\k\3\u\w\u\c\q\o\9\q\g\v\q\o\o\3\a\z\5\8\i\f\f\w\0\w\7\1\v\v\f\4\5\o\g\r\v\x\j\1\b\0\y\o\g\g\7\1\j\1\f\8\2\i\6\p\0\1\l\4\8\v\u\b\9\h\m\p\d\x\o\y\3\u\5\d\n\l\a\h\n\i\o\5\d\f\d\q\d\z\6\d\z\x\g\3\o\5\n\g\x\4\b\b\p\p\6\c\s\z\0\j\s\t\5\t\x\e\s\p\p\r\y\l\n\3\w\5\e\f\y\5\t\b\x\d\e\8\x\6\4\v\3\6\7\u\3\i\u\5\l\f\u\5\j\1\5\p\8\7\j\9\y\c\w\y\1\l\5\z\z\x\c\q\z\2\4\9\x\r\z\r\1\n\a\3\x\8\6\1\u\r\w\y\9\t\7\8\w\p\z\m\s\f\m\9\f\r\7\r\8\x\o\7\5\1\x\h\5\e\l\4\f\x\t\b\2\h\x\7\x\6\1\s\r\h\v\e\o\2\8\e\f\o\5\1\t\z\x\u\5\4\6\w\i\6\1\s\s\7\d\2\5\2\p\2\t\e\j\h\d\h\w\f\h\r\p\h\y\e\d\x\y\d\s\n\v\1\w\e\v\b\n\h\o\m\7\u\q\2\j\h\g\t\o\h\v\x\f\j\a\r\7\u\w\n\7\j\p\7\o\3\3\u\j\u\h\d\j\m\0\p\l\x\n\6\q\a\3\s\n\a\z\g\h\c\6\2\r\z\e\m\a\s\b\k\8\f\4\0\p\k\1\n\q\9\x\z\x\8\u\f\m\f\t\x\a\w\b\g\y\7\5\7\r\7\3\0\9\n\3\9\x\a\v\i\v\2\6\v\o\0\4\e\c\n\x\s\u\p\x\g\h\a\u\f\r\r\s\c\5\8\0\e\p\4\j\d\n\s\o\f\x\r\3\1\v\j\s\d\e\6\p\h\m\0\d\e\s\m\b\0\n\0\c\h\g\w\6\p\m\k\m\1\v\i\7\e\2\g\7\b\v\k\v\8\a\2\e\5\p\k\s\l\t\m\2\i\a\t\l\k\d\a\t\z\f\m\c\g\s\q\v\p\l\y\2\f\q\z\z\r\t\w\l\x\x\k\q\9\w\a\7\q\z\j\n\r\i\4\8\1\8\n\0\3\m\v\m\6\b\q\t\h\0\7\q\t\5\c\8\6\7\1\y\0\6\a\o\0\u\c\5\a\x\z\z
\r\d\o\h\4\e\l\9\o\t\t\y\x\i\y\f\o\m\a\m\j\l\8\f\j\9\o\2\w\6\c\d\d\s\o\w\d\w\9\r\l\e\j\b\a\s\c\v\4\2\m\v\o\f\5\4\a\2\m\8\c\y\h\k\f\f\j\l\4\b\8\e\v\t\o\6\v\m\b\k\0\m\1\8\2\e\w\i\q\c\f\h\q\c\1\4\o\w\9\0\g\i\s\e\j\v\7\f\y\t\h\b\e\p\5\d\k\4\z\1\v\b\4\g\e\a\q\i\u\k\u\x\6\m\q\r\o\7\m\m\d\h\y\w\4\d\e\k\e\a\j\k\l\d\y\8\o\x\9\1\s\l\t\0\b\1\k\4\y\i\0\e\f\i\l\o\b\v\6\q\1\e\a\j\m\g\y\j\7\6\m\7\4\l\0\p\r\j\1\y\y\d\4\1\0\z\w\l\9\5\d\5\c\9\n\s\s\n\9\6\4\d\w\h\8\2\w\y\f\t\m\c\f\j\v\h\v\7\k\z\7\h\n\p\f\t\e\x\3\8\j\6\2\a\e\3\2\e\e\9\1\g\k\0\w\u\k\t\c\0\4\a\m\r\1\e\i\i\i\3\7\8\j\b\0\7\m\9\d\m\3\2\c\y\2\h\k\s\i\7\7\s\k\y\v\o\n\a\0\z\g\2\j\h\j\9\j\1\f\c\c\3\3\j\v\u\a\y\p\x\d\i\d\i\0\e\7\v\v\c\4\7\f\b\9\9\f\a\1\c\x\l\3\6\1\m\d\d\i\t\y\a\f\2\h\g\r\w\c\j\t\u\k\q\s\k\x\1\4\j\b\u\b\7\r\f\d\c\q\i\7\9\7\7\b\o\j\k\m\y\6\u\6\p\l\e\f\w\4\s\f\t\c\h\g\n\t\6\r\q\n\x\2\y\m\e\o\r\8\h\q\z\i\n\m\p\v\9\0\9\k\p\n\2\x\f\b\y\u\v\g\c\k\v\5\7\7\l\u\8\m\b\b\t\d\f\6\3\x\h\e\0\n\2\6\p\m\3\5\i\l\l\k\8\s\m\g\e\b\t\r\4\8\r\v\g\f\i\p\u\x\k\1\s\3\b\o\b\n\6\u\2\z\3\k\i\c\4\h\z\d\t\6\9\3\7\h\o\j\l\7\u\v\k\5\a\l\w\z\d\p\y\a\j\7\4\1\n\m\1\b\0\d\i\u\c\a\i\7\1\i\2\q\x\j\9\a\9\l\z\i\e\5\r\j\y\q\9\x\9\2\i\8\n\n\e\y\y\p\8\r\f\6\p\a\7\d\i\7\8\r\y\v\f\8\l\4\2\z\l\q\5\d\g\m\g\i\i\a\q\6\8\q\4\i\x\m\o\c\j\8\i\c\0\y\y\9\e\v\l\9\7\j\g\t\a\c\2\2\r\v\0\m\l\i\3\w\b\f\v\3\8\o\e\7\e\k\l\j\j\s\r\2\v\l\j\f\f\h\h\0\i\1\a\8\h\1\6\9\g\p\d\m\j\5\u\v\7\m\l\n\j\f\i\4\y\j\p\x\0\s\1\w\x\r\4\d\p\f\q\n\e\5\3\e\h\n\n\y\t\y\0\u\f\j\p\n\j\j\l\d\7\0\0\n\m\e\g\g\2\1\k\v\x\4\o\v\u\z\q\8\w\d\h\k\0\9\w\t\7\l\p\k\d\8\c\h\u\s\n\3\j\z\z\l\w\k\l\p\z\9\9\0\n\p\8\v\c\9\5\t\n\g\i\u\m\6\i\x\4\m\f\2\i\a\g\a\4\q\e\6\c\m\9\x\e\j\q\w\o\s\k\1\0\t\e\2\p\1\d\w\n\6\4\3\q\5\o\o\n\4\1\q\m\5\4\u\y\o\1\g\i\g\3\t\v\x\v\6\2\0\u\j\2\g\h\x\2\9\3\g\2\c\g\2\s\t\a\v\h\8\7\x\1\o\j\n\2\i\v\2\t\8\l\e\6\e\z\x\n\d\n\1\p\o\a\u\4\t\h\w\j\8\5\2\w\h\s\n\9\r\o\3\y\g\3\p\e\3\i\3\9\o\n\3\q\w\l\x\q\4\4\g\w\6\8\2\a\s\h\1\u\2\m\d\k\p\9\5\a\x\3\a\a\h\j\l\f\d\q\2\l\o\p\6\9\c\i\e\b\y\w\q\x\h\n\1\n\y\5\s\8\v\o\l\k\q\i\e\k\s\r\p\a\g\4\1\i\1\w\j\l\d\8\5\c\g\6\f\h\m\k\h\d\4\o\c\7\8\i\v\p\r\r\2\m\a\h\g\i\s\f\w\3\b\l\5\y\z\a\r\k\o\z\4\l\t\4\w\2\2\i\u\p\h\4\2\1\2\g\r\j\7\6\o\v\4\u\c\f\5\s\5\l\f\2\9\4\d\8\m\f\h\i\h\6\4\9\1\a\z\u\o\3\k\7\2\w\d\7\p\m\j\f\t\c\s\v\y\0\j\y\8\d\o\0\5\p\q\6\j\f\7\b\a\2\f\9\8\v\k\g\f\k\b\v\l\7\r\i\c\m\2\b\5\q\p\z\u\8\4\w\j\o\y\g\6\m\0\2\3\x\2\m\s\h\5\9\0\h\t\s\3\v\0\f\h\w\n\g\q\j\z\a\a\d\2\1\r\i\b\i\7\q\r\1\j\b\u\0\s\z\u\m\u\n\y\w\f\g\g\w\i\r\2\3\5\3\n\c\n\u\u\f\j\e\6\r\4\8\1\l\v\f\2\f\j\b\8\o\e\w\s\a\n\w\o\i\6\a\6\q\h\b\h\m\m\3\i\n\9\n\k\p\z\m\p\8\f\4\u\o\n\2\w\8\o\r\g\i\q\x\y\d\h\7\7\a\u\c\j\2\1\x\k\7\3\y\p\r\a\e\0\p\w\6\b\4\v\6\2\8\t\2\p\w\v\h\1\g\z\a\q\d\e\s\3\w\l\u\z\7\p\0\x\e\u\f\w\2\j\z\a\c\w\2\2\7\s\4\3\e\t\3\d\q\0\v\e\1\7\l\k\y\i\d\w\b\5\z\r\h\x\l\j\8\y\6\u\0\p\m\j\k\n\7\q\o\n\3\o\2\l\u\w\h\5\1\g\6\y\a\s\s\5\d\3\n\b\a\e\m\f\u\u\h\q\o\q\d\8\d\2\t\5\w\2\l\9\n\4\3\8\u\f\k\j\p\u\m\t\i\2\8\o\x\d\m\n\0\2\u\h\w\6\l\b\c\c\i\2\r\o\o\h\0\x\b\q\6\0\g\d\h\y\l\y\3\5\t\d\y\4\z\s\1\t\l\m\i\k\6\3\u\v\h\c\t\7\7\z\c\5\u\6\4\d\6\m\y\t\a\w\7\l\n\h\0\f\h\k\4\g\c\7\k\x\i\9\8\4\u\p\t\h\e\r\9\f\2\y\8\s\z\v\1\2\y\f\f\v\3\j\v\4\u\7\4\h\v\y\0\p\3\p\j\3\r\s\y\e\t\u\1\h\3\h\w\1\v\7\j\a\x\r\x\k\t\d\w\6\9\n\g\s\f\s\n\0\y\w\d\6\h\y\4\o\j\9\g\f\h\q\i\1\7\j\y\7\s\2\y\q\6\d\a\9\1\2\9\1\b\p\3\l\n\8\r\o\k\2\k\3\i\r\7\f\4\6\8\9\z\y\n\2\m\0\6\h\h\1\0\a\e\m\a\k\g\t\7\k\o\e\w\i\r\x\6\o\j\z\0\d\a\6\k\h\i\8\4\y\9\c\1\v\0\s\2\4\q\o\5\g\b\t\5\n\9\7\l\i\c\a\b\p\9\z\0\r\j\b\f\x\b\v\g\6\7\k\c\2\v\b\0\i\a\l\1\k\b\s\v\5\q\7\a\n\b\d\s\9\f\l\w\k\3\u\
w\6\6\6\6\2\m\v\h\h\f\j\u\x\9\1\v\d\j\w\w\u\m\l\f\3\j\l\6\m\z\t\n\6\r\5\f\j\i\t\3\5\x\1\0\y\o\l\k\5\h\n\4\e\c\y\h\0\t\p\6\a\4\l\w\8\t\5\4\w\n\g\k\1\g\9\z\0\0\d\z\s\e\2\0\q\q\x\e\c\6\w\a\r\m\l\g\8\9\h\c\x\c\c\y\s\g\h\s\c\p\6\e\p\t\j\m\j\c\w\t\d\c\m\m\4\v\c\r\3\b\7\7\k\z\z\z\w\b\z\q\h\x\f\e\8\e\k\6\3\2\h\m\j\w\d\n\5\5\o\o\e\v\9\e\y\y\s\o\l\s\f\p\h\3\5\7\n\1\2\1\o\2\d\3\6\i\c\x\g\j\b\h\k\r\y\x\7\q\i\b\p\v\s\4\w\9\v\c\f\p\j\4\y\l\7\5\s\w\p\s\f\0\4\s\k\1\h\p\9\d\2\1\g\u\p\s\a\9\y\y\h\z\7\h\t\j\l\w\e\i\k\z\9\d\1\d\9\c\j\5\o\2\h\j\0\u\k\3\j\h\w\6\b\9\6\9\l\7\j\a\c\i\v\r\1\x\5\f\o\1\n\b\a\g\q\e\j\4\h\a\d\d\9\j\v\s\9\t\y\m\g\a\i\b\l\l\7\a\m\l\p\6\5\f\l\5\7\g\g\2\q\c\k\l\k\q\z\o\p\p\0\4\b\r\i\f\0\f\t\t\p\p\s\i\3\r\9\o\0\7\b\7\t\g\4\v\e\2\c\3\5\d\z\5\q\z\y\w\6\j\d\3\0\w\h\p\z\e\7\e\9\t\2\h\t\q\o\8\1\a\w\b\7\8\6\s\9\f\p\1\w\s\d\4\r\q\t\x\c\9\m\n\w\2\n\n\t\3\y\1\w\s\9\o\w\t\j\s\i\6\r\5\w\y\v\h\h\l\u\1\x\g\x\z\r\e\d\1\7\m\d\9\2\p\t\n\p\t\c\l\a\i\r\p\g\t\1\2\e\z\o\l\0\s\8\h\h\d\2\z\f\j\y\i\7\2\o\e\r\b\y\e\0\e\h\s\6\t\c\f\p\9\9\t\3\r\x\3\y\3\f\f\0\u\o\x\o\q\s\i\0\e\7\w\9\i\a\p\a\o\8\d\o\b\d\f\d\z\w\n\k\8\2\u\0\q\h\x\3\k\e\m\2\s\8\i\t\f\6\0\q\0\x\c\4\4\n\q\x\y\0\w\5\h\z\l\0\a\z\f\p\v\i\4\f\j\c\m\j\4\f\t\g\o\t\r\2\9\v\m\m\u\p\c\t\p\b\o\i\v\h\c\8\e\k\g\a\4\n\r\a\o\s\f\a\h\q\7\s\r\q\l\c\1\5\4\o\4\s\d\w\i\6\e\2\k\4\r\6\k\u\8\5\c\o\d\1\n\m\1\0\e\7\n\j\g\7\l\i\7\k\a\6\i\r\b\7\h\r\o\g\a\h\q\1\b\m\t\g\p\j\x\y\r\w\5\0\y\i\z\j\m\f\6\v\u\f\i\j\b\l\o\d\m\n\p\9\2\9\r\v\z\l\0\s\s\3\w\w\t\f\c\1\q\0\3\s\l\r\j\d\i\g\9\5\x\i\x\5\f\e\h\i\m\h\2\j\d\w\6\n\t\e\v\y\n\r\q\w\d\s\6\6\g\8\p\5\1\w\3\p\e\s\r\r\v\3\y\d\0\e\u\q\4\m\d\1\l\w\r\o\n\3\1\3\n\y\8\a\v\c\3\6\2\9\n\2\m\c\m\0\6\i\s\k\g\f\k\b\l\7\7\b\j\k\t\7\v\v\t\x\k\9\x\6\s\h\q\w\i\w\g\m\m\j\b\k\h\t\1\j\w\f\e\m\o\d\t\k\3\6\a\6\1\y\t\2\e\y\8\o\z\x\u\b\g\r\d\8\o\e\r\t\y\1\7\t\h\3\1\e\k\5\l\t\e\c\v\q\c\f\n\p\w\5\6\g\t\c\0\r\e\m\7\2\a\w\j\d\n\9\k\r\c\8\v\6\1\u\b\h\j\e\8\r\k\i\q\t\a\f\6\a\6\0\d\o\u\e\8\o\d\8\c\i\3\i\4\0\y\m\x\w\4\s\a\r\m\q\d\m\t\p\l\c\a\s\7\x\v\v\z\5\m\k\a\b\x\9\d\s\g\l\w\y\g\p\1\y\6\x\g\z\g\g\o\1\9\3\n\u\5\h\e\r\2\w\5\3\d\a\g\h\s\v\v\x\8\i\w\h\8\d\8\l\w\p\7\x\1\p\b\h\s\i\6\o\r\9\2\t\t\r\k\b\7\a\l\1\1\2\s\o\v\b\2\m\w\d\p\b\f\n\z\x\7\u\q\n\c\w\c\8\u\a\h\n\9\x\6\q\e\e\p\3\y\a\d\3\n\b\a\g\b\d\b\u\y\j\7\j\6\x\w\3\0\n\x\c\8\x\c\9\r\k\3\c\d\u\j\9\b\6\o\n\3\2\i\6\0\d\7\4\k\1\0\e\o\f\3\v\o\2\z\y\j\6\f\b\b\v\6\1\0\6\g\v\4\6\r\b\g\5\u\u\0\j\x\l\9\b\g\y\7\l\1\o\8\o\a\n\v\d\4\v\5\p\w\5\d\8\b\d\7\b\7\j\g\2\7\z\s\c\w\h\p\x\4\s\p\6\6\8\j\9\k\2\n\s\d\n\u\c\c\n\k\2\4\f\2\4\u\k\r\h\l\3\0\e\r\n\l\1\6\g\6\m\q\2\z\7\2\7\4\6\t\k\n\k\t\2\x\q\f\r\d\b\b\u\k\9\2\h\p\p\n\y\g\5\f\1\3\7\o\0\l\k\v\j\9\f\v\m\g\i\l\s\8\y\5\4\4\w\q\i\y\l\b\y\g\4\0\a\2\t\y\b\d\5\u\6\d\o\g ]] 00:06:26.872 00:06:26.872 real 0m1.395s 00:06:26.872 user 0m0.947s 00:06:26.872 sys 0m0.694s 00:06:26.872 10:50:13 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:26.872 10:50:13 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:26.872 10:50:13 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:06:26.872 10:50:13 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:06:26.872 10:50:13 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:26.872 10:50:13 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:26.872 10:50:13 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:06:26.872 10:50:13 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 
00:06:26.872 10:50:13 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:06:26.872 10:50:13 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:26.872 10:50:13 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:06:26.872 10:50:13 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:26.872 10:50:13 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:26.872 [2024-11-15 10:50:13.682171] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:06:26.872 [2024-11-15 10:50:13.682436] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59911 ] 00:06:26.872 { 00:06:26.872 "subsystems": [ 00:06:26.872 { 00:06:26.872 "subsystem": "bdev", 00:06:26.872 "config": [ 00:06:26.872 { 00:06:26.872 "params": { 00:06:26.872 "trtype": "pcie", 00:06:26.872 "traddr": "0000:00:10.0", 00:06:26.872 "name": "Nvme0" 00:06:26.872 }, 00:06:26.872 "method": "bdev_nvme_attach_controller" 00:06:26.872 }, 00:06:26.872 { 00:06:26.872 "method": "bdev_wait_for_examine" 00:06:26.872 } 00:06:26.872 ] 00:06:26.872 } 00:06:26.872 ] 00:06:26.872 } 00:06:27.132 [2024-11-15 10:50:13.828602] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.132 [2024-11-15 10:50:13.880541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.132 [2024-11-15 10:50:13.950786] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:27.391  [2024-11-15T10:50:14.511Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:27.650 00:06:27.650 10:50:14 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:27.650 ************************************ 00:06:27.650 END TEST spdk_dd_basic_rw 00:06:27.650 ************************************ 00:06:27.650 00:06:27.650 real 0m19.069s 00:06:27.650 user 0m13.563s 00:06:27.650 sys 0m7.918s 00:06:27.650 10:50:14 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:27.650 10:50:14 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:27.650 10:50:14 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:06:27.650 10:50:14 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:27.650 10:50:14 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:27.650 10:50:14 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:27.650 ************************************ 00:06:27.650 START TEST spdk_dd_posix 00:06:27.650 ************************************ 00:06:27.651 10:50:14 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:06:27.651 * Looking for test storage... 
00:06:27.651 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:27.651 10:50:14 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:27.651 10:50:14 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1693 -- # lcov --version 00:06:27.651 10:50:14 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:27.910 10:50:14 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:27.910 10:50:14 spdk_dd.spdk_dd_posix -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:27.910 10:50:14 spdk_dd.spdk_dd_posix -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:27.910 10:50:14 spdk_dd.spdk_dd_posix -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:27.910 10:50:14 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # IFS=.-: 00:06:27.910 10:50:14 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # read -ra ver1 00:06:27.910 10:50:14 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # IFS=.-: 00:06:27.910 10:50:14 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # read -ra ver2 00:06:27.910 10:50:14 spdk_dd.spdk_dd_posix -- scripts/common.sh@338 -- # local 'op=<' 00:06:27.910 10:50:14 spdk_dd.spdk_dd_posix -- scripts/common.sh@340 -- # ver1_l=2 00:06:27.910 10:50:14 spdk_dd.spdk_dd_posix -- scripts/common.sh@341 -- # ver2_l=1 00:06:27.910 10:50:14 spdk_dd.spdk_dd_posix -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:27.910 10:50:14 spdk_dd.spdk_dd_posix -- scripts/common.sh@344 -- # case "$op" in 00:06:27.910 10:50:14 spdk_dd.spdk_dd_posix -- scripts/common.sh@345 -- # : 1 00:06:27.910 10:50:14 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:27.910 10:50:14 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:27.910 10:50:14 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # decimal 1 00:06:27.910 10:50:14 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=1 00:06:27.910 10:50:14 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:27.910 10:50:14 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 1 00:06:27.910 10:50:14 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # ver1[v]=1 00:06:27.910 10:50:14 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # decimal 2 00:06:27.910 10:50:14 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=2 00:06:27.911 10:50:14 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:27.911 10:50:14 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 2 00:06:27.911 10:50:14 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # ver2[v]=2 00:06:27.911 10:50:14 spdk_dd.spdk_dd_posix -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:27.911 10:50:14 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:27.911 10:50:14 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # return 0 00:06:27.911 10:50:14 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:27.911 10:50:14 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:27.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.911 --rc genhtml_branch_coverage=1 00:06:27.911 --rc genhtml_function_coverage=1 00:06:27.911 --rc genhtml_legend=1 00:06:27.911 --rc geninfo_all_blocks=1 00:06:27.911 --rc geninfo_unexecuted_blocks=1 00:06:27.911 00:06:27.911 ' 00:06:27.911 10:50:14 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:27.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.911 --rc genhtml_branch_coverage=1 00:06:27.911 --rc genhtml_function_coverage=1 00:06:27.911 --rc genhtml_legend=1 00:06:27.911 --rc geninfo_all_blocks=1 00:06:27.911 --rc geninfo_unexecuted_blocks=1 00:06:27.911 00:06:27.911 ' 00:06:27.911 10:50:14 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:27.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.911 --rc genhtml_branch_coverage=1 00:06:27.911 --rc genhtml_function_coverage=1 00:06:27.911 --rc genhtml_legend=1 00:06:27.911 --rc geninfo_all_blocks=1 00:06:27.911 --rc geninfo_unexecuted_blocks=1 00:06:27.911 00:06:27.911 ' 00:06:27.911 10:50:14 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:27.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.911 --rc genhtml_branch_coverage=1 00:06:27.911 --rc genhtml_function_coverage=1 00:06:27.911 --rc genhtml_legend=1 00:06:27.911 --rc geninfo_all_blocks=1 00:06:27.911 --rc geninfo_unexecuted_blocks=1 00:06:27.911 00:06:27.911 ' 00:06:27.911 10:50:14 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:27.911 10:50:14 spdk_dd.spdk_dd_posix -- scripts/common.sh@15 -- # shopt -s extglob 00:06:27.911 10:50:14 spdk_dd.spdk_dd_posix -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:27.911 10:50:14 spdk_dd.spdk_dd_posix -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:27.911 10:50:14 spdk_dd.spdk_dd_posix -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:27.911 10:50:14 spdk_dd.spdk_dd_posix -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.911 10:50:14 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.911 10:50:14 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.911 10:50:14 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:06:27.911 10:50:14 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.911 10:50:14 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:06:27.911 10:50:14 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:06:27.911 10:50:14 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:06:27.911 10:50:14 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:06:27.911 10:50:14 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:27.911 10:50:14 spdk_dd.spdk_dd_posix -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:27.911 10:50:14 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:06:27.911 10:50:14 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:06:27.911 * First test run, liburing in use 00:06:27.911 10:50:14 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:06:27.911 10:50:14 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:27.911 10:50:14 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:06:27.911 10:50:14 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:27.911 ************************************ 00:06:27.911 START TEST dd_flag_append 00:06:27.911 ************************************ 00:06:27.911 10:50:14 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1129 -- # append 00:06:27.911 10:50:14 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:06:27.911 10:50:14 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:06:27.911 10:50:14 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:06:27.911 10:50:14 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:06:27.911 10:50:14 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:06:27.911 10:50:14 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=rj4sx1p47ufrtyrrynu2phqne2n3klce 00:06:27.911 10:50:14 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:06:27.911 10:50:14 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:06:27.911 10:50:14 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:06:27.911 10:50:14 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=wvzy5drtcoqah45xnhu5ku6qnipkcwcd 00:06:27.911 10:50:14 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s rj4sx1p47ufrtyrrynu2phqne2n3klce 00:06:27.911 10:50:14 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s wvzy5drtcoqah45xnhu5ku6qnipkcwcd 00:06:27.911 10:50:14 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:06:27.911 [2024-11-15 10:50:14.656799] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
00:06:27.911 [2024-11-15 10:50:14.656902] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59983 ] 00:06:28.169 [2024-11-15 10:50:14.799900] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.169 [2024-11-15 10:50:14.859272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.169 [2024-11-15 10:50:14.929584] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:28.169  [2024-11-15T10:50:15.289Z] Copying: 32/32 [B] (average 31 kBps) 00:06:28.428 00:06:28.428 10:50:15 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ wvzy5drtcoqah45xnhu5ku6qnipkcwcdrj4sx1p47ufrtyrrynu2phqne2n3klce == \w\v\z\y\5\d\r\t\c\o\q\a\h\4\5\x\n\h\u\5\k\u\6\q\n\i\p\k\c\w\c\d\r\j\4\s\x\1\p\4\7\u\f\r\t\y\r\r\y\n\u\2\p\h\q\n\e\2\n\3\k\l\c\e ]] 00:06:28.428 00:06:28.428 real 0m0.634s 00:06:28.428 user 0m0.355s 00:06:28.428 sys 0m0.344s 00:06:28.428 ************************************ 00:06:28.428 END TEST dd_flag_append 00:06:28.428 ************************************ 00:06:28.428 10:50:15 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:28.428 10:50:15 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:06:28.428 10:50:15 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:06:28.428 10:50:15 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:28.428 10:50:15 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:28.428 10:50:15 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:28.428 ************************************ 00:06:28.428 START TEST dd_flag_directory 00:06:28.428 ************************************ 00:06:28.428 10:50:15 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1129 -- # directory 00:06:28.428 10:50:15 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:28.428 10:50:15 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # local es=0 00:06:28.428 10:50:15 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:28.428 10:50:15 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:28.428 10:50:15 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:28.428 10:50:15 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:28.428 10:50:15 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:28.428 10:50:15 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:28.688 10:50:15 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:28.688 10:50:15 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:28.688 10:50:15 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:28.688 10:50:15 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:28.688 [2024-11-15 10:50:15.340734] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:06:28.688 [2024-11-15 10:50:15.340829] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60017 ] 00:06:28.688 [2024-11-15 10:50:15.483668] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.688 [2024-11-15 10:50:15.536826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.947 [2024-11-15 10:50:15.607088] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:28.947 [2024-11-15 10:50:15.649794] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:28.947 [2024-11-15 10:50:15.649859] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:28.947 [2024-11-15 10:50:15.649878] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:29.207 [2024-11-15 10:50:15.808653] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:29.207 10:50:15 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # es=236 00:06:29.207 10:50:15 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:29.207 10:50:15 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@664 -- # es=108 00:06:29.207 10:50:15 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@665 -- # case "$es" in 00:06:29.208 10:50:15 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@672 -- # es=1 00:06:29.208 10:50:15 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:29.208 10:50:15 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:29.208 10:50:15 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # local es=0 00:06:29.208 10:50:15 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:29.208 10:50:15 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:29.208 10:50:15 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:29.208 10:50:15 
spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:29.208 10:50:15 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:29.208 10:50:15 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:29.208 10:50:15 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:29.208 10:50:15 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:29.208 10:50:15 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:29.208 10:50:15 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:29.208 [2024-11-15 10:50:15.960505] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:06:29.208 [2024-11-15 10:50:15.960633] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60021 ] 00:06:29.476 [2024-11-15 10:50:16.111812] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.476 [2024-11-15 10:50:16.178474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.476 [2024-11-15 10:50:16.250127] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:29.476 [2024-11-15 10:50:16.294719] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:29.476 [2024-11-15 10:50:16.294798] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:29.476 [2024-11-15 10:50:16.294817] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:29.748 [2024-11-15 10:50:16.451353] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:29.748 10:50:16 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # es=236 00:06:29.748 10:50:16 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:29.748 10:50:16 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@664 -- # es=108 00:06:29.748 10:50:16 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@665 -- # case "$es" in 00:06:29.748 10:50:16 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@672 -- # es=1 00:06:29.748 10:50:16 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:29.748 00:06:29.748 real 0m1.257s 00:06:29.748 user 0m0.716s 00:06:29.748 sys 0m0.331s 00:06:29.748 10:50:16 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:29.748 ************************************ 00:06:29.748 END TEST dd_flag_directory 00:06:29.748 10:50:16 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:06:29.748 ************************************ 00:06:29.748 10:50:16 
spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:06:29.748 10:50:16 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:29.748 10:50:16 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:29.748 10:50:16 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:29.748 ************************************ 00:06:29.748 START TEST dd_flag_nofollow 00:06:29.748 ************************************ 00:06:29.748 10:50:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1129 -- # nofollow 00:06:29.748 10:50:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:29.748 10:50:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:29.748 10:50:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:29.748 10:50:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:29.748 10:50:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:29.748 10:50:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # local es=0 00:06:29.748 10:50:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:29.748 10:50:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:30.007 10:50:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:30.007 10:50:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:30.007 10:50:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:30.007 10:50:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:30.007 10:50:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:30.007 10:50:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:30.007 10:50:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:30.007 10:50:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:30.007 [2024-11-15 10:50:16.668111] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
00:06:30.007 [2024-11-15 10:50:16.668214] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60055 ] 00:06:30.007 [2024-11-15 10:50:16.811780] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.266 [2024-11-15 10:50:16.873396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.266 [2024-11-15 10:50:16.945632] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:30.266 [2024-11-15 10:50:16.988175] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:30.266 [2024-11-15 10:50:16.988246] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:30.266 [2024-11-15 10:50:16.988277] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:30.526 [2024-11-15 10:50:17.150886] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:30.526 10:50:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # es=216 00:06:30.526 10:50:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:30.526 10:50:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@664 -- # es=88 00:06:30.526 10:50:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@665 -- # case "$es" in 00:06:30.526 10:50:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@672 -- # es=1 00:06:30.526 10:50:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:30.526 10:50:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:30.526 10:50:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # local es=0 00:06:30.526 10:50:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:30.526 10:50:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:30.526 10:50:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:30.526 10:50:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:30.526 10:50:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:30.526 10:50:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:30.526 10:50:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:30.526 10:50:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:30.526 10:50:17 
spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:30.526 10:50:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:30.526 [2024-11-15 10:50:17.313421] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:06:30.526 [2024-11-15 10:50:17.313597] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60070 ] 00:06:30.786 [2024-11-15 10:50:17.469365] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.786 [2024-11-15 10:50:17.514484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.786 [2024-11-15 10:50:17.583315] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:30.786 [2024-11-15 10:50:17.625461] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:30.786 [2024-11-15 10:50:17.625519] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:30.786 [2024-11-15 10:50:17.625565] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:31.045 [2024-11-15 10:50:17.777477] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:31.045 10:50:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # es=216 00:06:31.045 10:50:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:31.045 10:50:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@664 -- # es=88 00:06:31.045 10:50:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@665 -- # case "$es" in 00:06:31.045 10:50:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@672 -- # es=1 00:06:31.045 10:50:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:31.045 10:50:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:06:31.045 10:50:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:06:31.045 10:50:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:06:31.045 10:50:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:31.304 [2024-11-15 10:50:17.910468] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
00:06:31.304 [2024-11-15 10:50:17.910614] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60072 ] 00:06:31.304 [2024-11-15 10:50:18.054686] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.304 [2024-11-15 10:50:18.098068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.564 [2024-11-15 10:50:18.168370] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:31.564  [2024-11-15T10:50:18.684Z] Copying: 512/512 [B] (average 500 kBps) 00:06:31.823 00:06:31.823 10:50:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ f6qjx2sf8k01ec15msjtxg68b1hq7n913vbp766b024w41kuv55vur6bjockbgudcumbecmedvgf1f59vviogecqoh72c4g6rkyjcv7w246jk23th301g7ln4mhixq18ku97f2n6t06ox06ry3w2c9h5iicumxioj2l0ommpdes6ow5tgf5kdfvrl76a47tdxrazlu4u0g5kgqkwot026vm6vee392ewk2c80uihorzkekugwphhinwqg3zmeaes3b3h5a96tgw89u1ltyf3zd4qv91apeqonhma2dmn3jn7xflf59rtq3n0yll452cuxuittsd294khagqug329hnn4ebz8oie9ycms2qdi7jgttnmjwi442dpnj06t4bcmcd6af5iuw2m13mqlagdiomvvj009gwvnsr5gvhd5c9nuteiklmt87bonbnonqxnlja3kvq10tynt3xodxqjvx9vuozh897w2nj80s81w1on409ehx681ogxa4meeivkq == \f\6\q\j\x\2\s\f\8\k\0\1\e\c\1\5\m\s\j\t\x\g\6\8\b\1\h\q\7\n\9\1\3\v\b\p\7\6\6\b\0\2\4\w\4\1\k\u\v\5\5\v\u\r\6\b\j\o\c\k\b\g\u\d\c\u\m\b\e\c\m\e\d\v\g\f\1\f\5\9\v\v\i\o\g\e\c\q\o\h\7\2\c\4\g\6\r\k\y\j\c\v\7\w\2\4\6\j\k\2\3\t\h\3\0\1\g\7\l\n\4\m\h\i\x\q\1\8\k\u\9\7\f\2\n\6\t\0\6\o\x\0\6\r\y\3\w\2\c\9\h\5\i\i\c\u\m\x\i\o\j\2\l\0\o\m\m\p\d\e\s\6\o\w\5\t\g\f\5\k\d\f\v\r\l\7\6\a\4\7\t\d\x\r\a\z\l\u\4\u\0\g\5\k\g\q\k\w\o\t\0\2\6\v\m\6\v\e\e\3\9\2\e\w\k\2\c\8\0\u\i\h\o\r\z\k\e\k\u\g\w\p\h\h\i\n\w\q\g\3\z\m\e\a\e\s\3\b\3\h\5\a\9\6\t\g\w\8\9\u\1\l\t\y\f\3\z\d\4\q\v\9\1\a\p\e\q\o\n\h\m\a\2\d\m\n\3\j\n\7\x\f\l\f\5\9\r\t\q\3\n\0\y\l\l\4\5\2\c\u\x\u\i\t\t\s\d\2\9\4\k\h\a\g\q\u\g\3\2\9\h\n\n\4\e\b\z\8\o\i\e\9\y\c\m\s\2\q\d\i\7\j\g\t\t\n\m\j\w\i\4\4\2\d\p\n\j\0\6\t\4\b\c\m\c\d\6\a\f\5\i\u\w\2\m\1\3\m\q\l\a\g\d\i\o\m\v\v\j\0\0\9\g\w\v\n\s\r\5\g\v\h\d\5\c\9\n\u\t\e\i\k\l\m\t\8\7\b\o\n\b\n\o\n\q\x\n\l\j\a\3\k\v\q\1\0\t\y\n\t\3\x\o\d\x\q\j\v\x\9\v\u\o\z\h\8\9\7\w\2\n\j\8\0\s\8\1\w\1\o\n\4\0\9\e\h\x\6\8\1\o\g\x\a\4\m\e\e\i\v\k\q ]] 00:06:31.823 00:06:31.823 real 0m1.851s 00:06:31.823 user 0m1.031s 00:06:31.823 sys 0m0.667s 00:06:31.823 10:50:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:31.823 10:50:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:06:31.823 ************************************ 00:06:31.823 END TEST dd_flag_nofollow 00:06:31.823 ************************************ 00:06:31.823 10:50:18 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:06:31.823 10:50:18 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:31.823 10:50:18 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:31.823 10:50:18 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:31.823 ************************************ 00:06:31.823 START TEST dd_flag_noatime 00:06:31.823 ************************************ 00:06:31.823 10:50:18 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1129 -- # noatime 00:06:31.823 10:50:18 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local 
atime_if 00:06:31.823 10:50:18 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:06:31.823 10:50:18 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:06:31.823 10:50:18 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:06:31.823 10:50:18 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:06:31.823 10:50:18 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:31.823 10:50:18 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1731667818 00:06:31.823 10:50:18 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:31.823 10:50:18 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1731667818 00:06:31.823 10:50:18 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:06:32.760 10:50:19 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:32.760 [2024-11-15 10:50:19.577798] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:06:32.760 [2024-11-15 10:50:19.577908] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60120 ] 00:06:33.020 [2024-11-15 10:50:19.722400] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.020 [2024-11-15 10:50:19.768750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.020 [2024-11-15 10:50:19.835705] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:33.279  [2024-11-15T10:50:20.140Z] Copying: 512/512 [B] (average 500 kBps) 00:06:33.279 00:06:33.279 10:50:20 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:33.279 10:50:20 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1731667818 )) 00:06:33.279 10:50:20 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:33.279 10:50:20 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1731667818 )) 00:06:33.279 10:50:20 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:33.537 [2024-11-15 10:50:20.182710] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
00:06:33.538 [2024-11-15 10:50:20.182809] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60128 ] 00:06:33.538 [2024-11-15 10:50:20.326794] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.538 [2024-11-15 10:50:20.370895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.797 [2024-11-15 10:50:20.438276] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:33.797  [2024-11-15T10:50:20.917Z] Copying: 512/512 [B] (average 500 kBps) 00:06:34.056 00:06:34.056 10:50:20 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:34.056 10:50:20 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1731667820 )) 00:06:34.056 00:06:34.056 real 0m2.223s 00:06:34.056 user 0m0.679s 00:06:34.056 sys 0m0.662s 00:06:34.056 10:50:20 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:34.056 10:50:20 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:06:34.056 ************************************ 00:06:34.056 END TEST dd_flag_noatime 00:06:34.056 ************************************ 00:06:34.056 10:50:20 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:06:34.056 10:50:20 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:34.056 10:50:20 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:34.056 10:50:20 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:34.056 ************************************ 00:06:34.056 START TEST dd_flags_misc 00:06:34.056 ************************************ 00:06:34.056 10:50:20 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1129 -- # io 00:06:34.056 10:50:20 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:06:34.056 10:50:20 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:06:34.056 10:50:20 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:06:34.056 10:50:20 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:34.056 10:50:20 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:06:34.056 10:50:20 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:06:34.056 10:50:20 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:06:34.056 10:50:20 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:34.056 10:50:20 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:34.056 [2024-11-15 10:50:20.841627] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
00:06:34.056 [2024-11-15 10:50:20.841730] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60162 ] 00:06:34.314 [2024-11-15 10:50:20.986814] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.314 [2024-11-15 10:50:21.032697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.314 [2024-11-15 10:50:21.102384] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:34.314  [2024-11-15T10:50:21.435Z] Copying: 512/512 [B] (average 500 kBps) 00:06:34.574 00:06:34.574 10:50:21 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ j34k6lp20gxqd3niap8g146fmgyho1z09h477ztf6gm94nb1ujnzdp4t89whoiv4idmsreo95ocnc8t2xrurhw5e49ky6x4dfydm822rb0183p5ae12ek9krupvc7mlmpcy94m214f1wmykkahquewany5q9uxzq3h07f37nfrc8x065dlmhxa8p63jh0wbe5ig0bfile5fdl8glo2z5l7epekfvxr2hjwlpp116n2bql2bwhf15i4zksc9lf05nv198jhr91am9sqjhzkfjrlboc8ym45ufpn6lnwmx6lm0oie252gq2qgxebehjhu2e2i3xccvu8i5qgqrt32pckvxp7jr79as5yoornd3zbmvohmikah3pichw0kg5c5e7qxhnacxcv17z0jmfrdrk73hpo2olu8qbcrjewpd7x6gr8ibo7dwe64w1bfinklhh4woaeyux5lg11zclrf72jpckbs36q858s7jhtam4enmo8gs8o1wu28usazxihbn == \j\3\4\k\6\l\p\2\0\g\x\q\d\3\n\i\a\p\8\g\1\4\6\f\m\g\y\h\o\1\z\0\9\h\4\7\7\z\t\f\6\g\m\9\4\n\b\1\u\j\n\z\d\p\4\t\8\9\w\h\o\i\v\4\i\d\m\s\r\e\o\9\5\o\c\n\c\8\t\2\x\r\u\r\h\w\5\e\4\9\k\y\6\x\4\d\f\y\d\m\8\2\2\r\b\0\1\8\3\p\5\a\e\1\2\e\k\9\k\r\u\p\v\c\7\m\l\m\p\c\y\9\4\m\2\1\4\f\1\w\m\y\k\k\a\h\q\u\e\w\a\n\y\5\q\9\u\x\z\q\3\h\0\7\f\3\7\n\f\r\c\8\x\0\6\5\d\l\m\h\x\a\8\p\6\3\j\h\0\w\b\e\5\i\g\0\b\f\i\l\e\5\f\d\l\8\g\l\o\2\z\5\l\7\e\p\e\k\f\v\x\r\2\h\j\w\l\p\p\1\1\6\n\2\b\q\l\2\b\w\h\f\1\5\i\4\z\k\s\c\9\l\f\0\5\n\v\1\9\8\j\h\r\9\1\a\m\9\s\q\j\h\z\k\f\j\r\l\b\o\c\8\y\m\4\5\u\f\p\n\6\l\n\w\m\x\6\l\m\0\o\i\e\2\5\2\g\q\2\q\g\x\e\b\e\h\j\h\u\2\e\2\i\3\x\c\c\v\u\8\i\5\q\g\q\r\t\3\2\p\c\k\v\x\p\7\j\r\7\9\a\s\5\y\o\o\r\n\d\3\z\b\m\v\o\h\m\i\k\a\h\3\p\i\c\h\w\0\k\g\5\c\5\e\7\q\x\h\n\a\c\x\c\v\1\7\z\0\j\m\f\r\d\r\k\7\3\h\p\o\2\o\l\u\8\q\b\c\r\j\e\w\p\d\7\x\6\g\r\8\i\b\o\7\d\w\e\6\4\w\1\b\f\i\n\k\l\h\h\4\w\o\a\e\y\u\x\5\l\g\1\1\z\c\l\r\f\7\2\j\p\c\k\b\s\3\6\q\8\5\8\s\7\j\h\t\a\m\4\e\n\m\o\8\g\s\8\o\1\w\u\2\8\u\s\a\z\x\i\h\b\n ]] 00:06:34.574 10:50:21 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:34.574 10:50:21 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:34.574 [2024-11-15 10:50:21.430396] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
00:06:34.574 [2024-11-15 10:50:21.430479] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60177 ] 00:06:34.833 [2024-11-15 10:50:21.565157] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.833 [2024-11-15 10:50:21.615676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.833 [2024-11-15 10:50:21.683842] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:35.091  [2024-11-15T10:50:22.212Z] Copying: 512/512 [B] (average 500 kBps) 00:06:35.351 00:06:35.351 10:50:21 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ j34k6lp20gxqd3niap8g146fmgyho1z09h477ztf6gm94nb1ujnzdp4t89whoiv4idmsreo95ocnc8t2xrurhw5e49ky6x4dfydm822rb0183p5ae12ek9krupvc7mlmpcy94m214f1wmykkahquewany5q9uxzq3h07f37nfrc8x065dlmhxa8p63jh0wbe5ig0bfile5fdl8glo2z5l7epekfvxr2hjwlpp116n2bql2bwhf15i4zksc9lf05nv198jhr91am9sqjhzkfjrlboc8ym45ufpn6lnwmx6lm0oie252gq2qgxebehjhu2e2i3xccvu8i5qgqrt32pckvxp7jr79as5yoornd3zbmvohmikah3pichw0kg5c5e7qxhnacxcv17z0jmfrdrk73hpo2olu8qbcrjewpd7x6gr8ibo7dwe64w1bfinklhh4woaeyux5lg11zclrf72jpckbs36q858s7jhtam4enmo8gs8o1wu28usazxihbn == \j\3\4\k\6\l\p\2\0\g\x\q\d\3\n\i\a\p\8\g\1\4\6\f\m\g\y\h\o\1\z\0\9\h\4\7\7\z\t\f\6\g\m\9\4\n\b\1\u\j\n\z\d\p\4\t\8\9\w\h\o\i\v\4\i\d\m\s\r\e\o\9\5\o\c\n\c\8\t\2\x\r\u\r\h\w\5\e\4\9\k\y\6\x\4\d\f\y\d\m\8\2\2\r\b\0\1\8\3\p\5\a\e\1\2\e\k\9\k\r\u\p\v\c\7\m\l\m\p\c\y\9\4\m\2\1\4\f\1\w\m\y\k\k\a\h\q\u\e\w\a\n\y\5\q\9\u\x\z\q\3\h\0\7\f\3\7\n\f\r\c\8\x\0\6\5\d\l\m\h\x\a\8\p\6\3\j\h\0\w\b\e\5\i\g\0\b\f\i\l\e\5\f\d\l\8\g\l\o\2\z\5\l\7\e\p\e\k\f\v\x\r\2\h\j\w\l\p\p\1\1\6\n\2\b\q\l\2\b\w\h\f\1\5\i\4\z\k\s\c\9\l\f\0\5\n\v\1\9\8\j\h\r\9\1\a\m\9\s\q\j\h\z\k\f\j\r\l\b\o\c\8\y\m\4\5\u\f\p\n\6\l\n\w\m\x\6\l\m\0\o\i\e\2\5\2\g\q\2\q\g\x\e\b\e\h\j\h\u\2\e\2\i\3\x\c\c\v\u\8\i\5\q\g\q\r\t\3\2\p\c\k\v\x\p\7\j\r\7\9\a\s\5\y\o\o\r\n\d\3\z\b\m\v\o\h\m\i\k\a\h\3\p\i\c\h\w\0\k\g\5\c\5\e\7\q\x\h\n\a\c\x\c\v\1\7\z\0\j\m\f\r\d\r\k\7\3\h\p\o\2\o\l\u\8\q\b\c\r\j\e\w\p\d\7\x\6\g\r\8\i\b\o\7\d\w\e\6\4\w\1\b\f\i\n\k\l\h\h\4\w\o\a\e\y\u\x\5\l\g\1\1\z\c\l\r\f\7\2\j\p\c\k\b\s\3\6\q\8\5\8\s\7\j\h\t\a\m\4\e\n\m\o\8\g\s\8\o\1\w\u\2\8\u\s\a\z\x\i\h\b\n ]] 00:06:35.351 10:50:21 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:35.351 10:50:21 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:35.351 [2024-11-15 10:50:22.042467] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
00:06:35.351 [2024-11-15 10:50:22.042616] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60183 ] 00:06:35.351 [2024-11-15 10:50:22.197883] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.610 [2024-11-15 10:50:22.242531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.610 [2024-11-15 10:50:22.311221] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:35.610  [2024-11-15T10:50:22.731Z] Copying: 512/512 [B] (average 166 kBps) 00:06:35.870 00:06:35.870 10:50:22 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ j34k6lp20gxqd3niap8g146fmgyho1z09h477ztf6gm94nb1ujnzdp4t89whoiv4idmsreo95ocnc8t2xrurhw5e49ky6x4dfydm822rb0183p5ae12ek9krupvc7mlmpcy94m214f1wmykkahquewany5q9uxzq3h07f37nfrc8x065dlmhxa8p63jh0wbe5ig0bfile5fdl8glo2z5l7epekfvxr2hjwlpp116n2bql2bwhf15i4zksc9lf05nv198jhr91am9sqjhzkfjrlboc8ym45ufpn6lnwmx6lm0oie252gq2qgxebehjhu2e2i3xccvu8i5qgqrt32pckvxp7jr79as5yoornd3zbmvohmikah3pichw0kg5c5e7qxhnacxcv17z0jmfrdrk73hpo2olu8qbcrjewpd7x6gr8ibo7dwe64w1bfinklhh4woaeyux5lg11zclrf72jpckbs36q858s7jhtam4enmo8gs8o1wu28usazxihbn == \j\3\4\k\6\l\p\2\0\g\x\q\d\3\n\i\a\p\8\g\1\4\6\f\m\g\y\h\o\1\z\0\9\h\4\7\7\z\t\f\6\g\m\9\4\n\b\1\u\j\n\z\d\p\4\t\8\9\w\h\o\i\v\4\i\d\m\s\r\e\o\9\5\o\c\n\c\8\t\2\x\r\u\r\h\w\5\e\4\9\k\y\6\x\4\d\f\y\d\m\8\2\2\r\b\0\1\8\3\p\5\a\e\1\2\e\k\9\k\r\u\p\v\c\7\m\l\m\p\c\y\9\4\m\2\1\4\f\1\w\m\y\k\k\a\h\q\u\e\w\a\n\y\5\q\9\u\x\z\q\3\h\0\7\f\3\7\n\f\r\c\8\x\0\6\5\d\l\m\h\x\a\8\p\6\3\j\h\0\w\b\e\5\i\g\0\b\f\i\l\e\5\f\d\l\8\g\l\o\2\z\5\l\7\e\p\e\k\f\v\x\r\2\h\j\w\l\p\p\1\1\6\n\2\b\q\l\2\b\w\h\f\1\5\i\4\z\k\s\c\9\l\f\0\5\n\v\1\9\8\j\h\r\9\1\a\m\9\s\q\j\h\z\k\f\j\r\l\b\o\c\8\y\m\4\5\u\f\p\n\6\l\n\w\m\x\6\l\m\0\o\i\e\2\5\2\g\q\2\q\g\x\e\b\e\h\j\h\u\2\e\2\i\3\x\c\c\v\u\8\i\5\q\g\q\r\t\3\2\p\c\k\v\x\p\7\j\r\7\9\a\s\5\y\o\o\r\n\d\3\z\b\m\v\o\h\m\i\k\a\h\3\p\i\c\h\w\0\k\g\5\c\5\e\7\q\x\h\n\a\c\x\c\v\1\7\z\0\j\m\f\r\d\r\k\7\3\h\p\o\2\o\l\u\8\q\b\c\r\j\e\w\p\d\7\x\6\g\r\8\i\b\o\7\d\w\e\6\4\w\1\b\f\i\n\k\l\h\h\4\w\o\a\e\y\u\x\5\l\g\1\1\z\c\l\r\f\7\2\j\p\c\k\b\s\3\6\q\8\5\8\s\7\j\h\t\a\m\4\e\n\m\o\8\g\s\8\o\1\w\u\2\8\u\s\a\z\x\i\h\b\n ]] 00:06:35.870 10:50:22 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:35.870 10:50:22 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:35.870 [2024-11-15 10:50:22.652113] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
00:06:35.870 [2024-11-15 10:50:22.652204] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60199 ] 00:06:36.130 [2024-11-15 10:50:22.790553] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.130 [2024-11-15 10:50:22.833232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.130 [2024-11-15 10:50:22.900547] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:36.130  [2024-11-15T10:50:23.250Z] Copying: 512/512 [B] (average 250 kBps) 00:06:36.389 00:06:36.389 10:50:23 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ j34k6lp20gxqd3niap8g146fmgyho1z09h477ztf6gm94nb1ujnzdp4t89whoiv4idmsreo95ocnc8t2xrurhw5e49ky6x4dfydm822rb0183p5ae12ek9krupvc7mlmpcy94m214f1wmykkahquewany5q9uxzq3h07f37nfrc8x065dlmhxa8p63jh0wbe5ig0bfile5fdl8glo2z5l7epekfvxr2hjwlpp116n2bql2bwhf15i4zksc9lf05nv198jhr91am9sqjhzkfjrlboc8ym45ufpn6lnwmx6lm0oie252gq2qgxebehjhu2e2i3xccvu8i5qgqrt32pckvxp7jr79as5yoornd3zbmvohmikah3pichw0kg5c5e7qxhnacxcv17z0jmfrdrk73hpo2olu8qbcrjewpd7x6gr8ibo7dwe64w1bfinklhh4woaeyux5lg11zclrf72jpckbs36q858s7jhtam4enmo8gs8o1wu28usazxihbn == \j\3\4\k\6\l\p\2\0\g\x\q\d\3\n\i\a\p\8\g\1\4\6\f\m\g\y\h\o\1\z\0\9\h\4\7\7\z\t\f\6\g\m\9\4\n\b\1\u\j\n\z\d\p\4\t\8\9\w\h\o\i\v\4\i\d\m\s\r\e\o\9\5\o\c\n\c\8\t\2\x\r\u\r\h\w\5\e\4\9\k\y\6\x\4\d\f\y\d\m\8\2\2\r\b\0\1\8\3\p\5\a\e\1\2\e\k\9\k\r\u\p\v\c\7\m\l\m\p\c\y\9\4\m\2\1\4\f\1\w\m\y\k\k\a\h\q\u\e\w\a\n\y\5\q\9\u\x\z\q\3\h\0\7\f\3\7\n\f\r\c\8\x\0\6\5\d\l\m\h\x\a\8\p\6\3\j\h\0\w\b\e\5\i\g\0\b\f\i\l\e\5\f\d\l\8\g\l\o\2\z\5\l\7\e\p\e\k\f\v\x\r\2\h\j\w\l\p\p\1\1\6\n\2\b\q\l\2\b\w\h\f\1\5\i\4\z\k\s\c\9\l\f\0\5\n\v\1\9\8\j\h\r\9\1\a\m\9\s\q\j\h\z\k\f\j\r\l\b\o\c\8\y\m\4\5\u\f\p\n\6\l\n\w\m\x\6\l\m\0\o\i\e\2\5\2\g\q\2\q\g\x\e\b\e\h\j\h\u\2\e\2\i\3\x\c\c\v\u\8\i\5\q\g\q\r\t\3\2\p\c\k\v\x\p\7\j\r\7\9\a\s\5\y\o\o\r\n\d\3\z\b\m\v\o\h\m\i\k\a\h\3\p\i\c\h\w\0\k\g\5\c\5\e\7\q\x\h\n\a\c\x\c\v\1\7\z\0\j\m\f\r\d\r\k\7\3\h\p\o\2\o\l\u\8\q\b\c\r\j\e\w\p\d\7\x\6\g\r\8\i\b\o\7\d\w\e\6\4\w\1\b\f\i\n\k\l\h\h\4\w\o\a\e\y\u\x\5\l\g\1\1\z\c\l\r\f\7\2\j\p\c\k\b\s\3\6\q\8\5\8\s\7\j\h\t\a\m\4\e\n\m\o\8\g\s\8\o\1\w\u\2\8\u\s\a\z\x\i\h\b\n ]] 00:06:36.389 10:50:23 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:36.389 10:50:23 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:06:36.389 10:50:23 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:06:36.389 10:50:23 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:06:36.389 10:50:23 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:36.389 10:50:23 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:36.389 [2024-11-15 10:50:23.246631] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
00:06:36.389 [2024-11-15 10:50:23.246725] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60203 ] 00:06:36.648 [2024-11-15 10:50:23.391292] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.648 [2024-11-15 10:50:23.435695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.648 [2024-11-15 10:50:23.502583] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:36.907  [2024-11-15T10:50:23.768Z] Copying: 512/512 [B] (average 500 kBps) 00:06:36.907 00:06:37.167 10:50:23 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ goo7m29eu47jjrnibw9253z67nbhg2vc9gbq2y95b4a4qq57m4ao3evl6lb6kgm97dv08y6pfm02ml58i65fzzrxp4pn3wz511wxp8zaihh03i6otumak5v2tse458vlhq6ya69wigg8x2f9too6dqgai3omx1ssg1k0n3p6ka2rd8xtki4uh9ribp5ca0hn99ajt2w0mbaa0gd6uyibhjrvwswji49jv0wabb7ob4it0ym7ialxwnbirbflm10km4zcflicyuu4fktyu54l0pry1qkdqj0h1j4yhhv5j39w4ihb11rw16iacqfg4insclmuc1rpn27s5or2v70m7m181uo3i8bnl0q4g394n6hnjwir4676n8ixyltjqlqoav62g4njt55ywnytd7mzm9e102gsy11urg6l1t5p8w4tj4krelymlvessfh097fw0tzyrtez0pm2m8xlouitkdidzdsgtehf6hs48sfmbzi5yycr4u3cbvs94b7v23za == \g\o\o\7\m\2\9\e\u\4\7\j\j\r\n\i\b\w\9\2\5\3\z\6\7\n\b\h\g\2\v\c\9\g\b\q\2\y\9\5\b\4\a\4\q\q\5\7\m\4\a\o\3\e\v\l\6\l\b\6\k\g\m\9\7\d\v\0\8\y\6\p\f\m\0\2\m\l\5\8\i\6\5\f\z\z\r\x\p\4\p\n\3\w\z\5\1\1\w\x\p\8\z\a\i\h\h\0\3\i\6\o\t\u\m\a\k\5\v\2\t\s\e\4\5\8\v\l\h\q\6\y\a\6\9\w\i\g\g\8\x\2\f\9\t\o\o\6\d\q\g\a\i\3\o\m\x\1\s\s\g\1\k\0\n\3\p\6\k\a\2\r\d\8\x\t\k\i\4\u\h\9\r\i\b\p\5\c\a\0\h\n\9\9\a\j\t\2\w\0\m\b\a\a\0\g\d\6\u\y\i\b\h\j\r\v\w\s\w\j\i\4\9\j\v\0\w\a\b\b\7\o\b\4\i\t\0\y\m\7\i\a\l\x\w\n\b\i\r\b\f\l\m\1\0\k\m\4\z\c\f\l\i\c\y\u\u\4\f\k\t\y\u\5\4\l\0\p\r\y\1\q\k\d\q\j\0\h\1\j\4\y\h\h\v\5\j\3\9\w\4\i\h\b\1\1\r\w\1\6\i\a\c\q\f\g\4\i\n\s\c\l\m\u\c\1\r\p\n\2\7\s\5\o\r\2\v\7\0\m\7\m\1\8\1\u\o\3\i\8\b\n\l\0\q\4\g\3\9\4\n\6\h\n\j\w\i\r\4\6\7\6\n\8\i\x\y\l\t\j\q\l\q\o\a\v\6\2\g\4\n\j\t\5\5\y\w\n\y\t\d\7\m\z\m\9\e\1\0\2\g\s\y\1\1\u\r\g\6\l\1\t\5\p\8\w\4\t\j\4\k\r\e\l\y\m\l\v\e\s\s\f\h\0\9\7\f\w\0\t\z\y\r\t\e\z\0\p\m\2\m\8\x\l\o\u\i\t\k\d\i\d\z\d\s\g\t\e\h\f\6\h\s\4\8\s\f\m\b\z\i\5\y\y\c\r\4\u\3\c\b\v\s\9\4\b\7\v\2\3\z\a ]] 00:06:37.167 10:50:23 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:37.167 10:50:23 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:37.167 [2024-11-15 10:50:23.825137] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
00:06:37.167 [2024-11-15 10:50:23.825255] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60218 ] 00:06:37.167 [2024-11-15 10:50:23.970047] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.167 [2024-11-15 10:50:24.012748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.425 [2024-11-15 10:50:24.080125] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:37.425  [2024-11-15T10:50:24.545Z] Copying: 512/512 [B] (average 500 kBps) 00:06:37.684 00:06:37.684 10:50:24 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ goo7m29eu47jjrnibw9253z67nbhg2vc9gbq2y95b4a4qq57m4ao3evl6lb6kgm97dv08y6pfm02ml58i65fzzrxp4pn3wz511wxp8zaihh03i6otumak5v2tse458vlhq6ya69wigg8x2f9too6dqgai3omx1ssg1k0n3p6ka2rd8xtki4uh9ribp5ca0hn99ajt2w0mbaa0gd6uyibhjrvwswji49jv0wabb7ob4it0ym7ialxwnbirbflm10km4zcflicyuu4fktyu54l0pry1qkdqj0h1j4yhhv5j39w4ihb11rw16iacqfg4insclmuc1rpn27s5or2v70m7m181uo3i8bnl0q4g394n6hnjwir4676n8ixyltjqlqoav62g4njt55ywnytd7mzm9e102gsy11urg6l1t5p8w4tj4krelymlvessfh097fw0tzyrtez0pm2m8xlouitkdidzdsgtehf6hs48sfmbzi5yycr4u3cbvs94b7v23za == \g\o\o\7\m\2\9\e\u\4\7\j\j\r\n\i\b\w\9\2\5\3\z\6\7\n\b\h\g\2\v\c\9\g\b\q\2\y\9\5\b\4\a\4\q\q\5\7\m\4\a\o\3\e\v\l\6\l\b\6\k\g\m\9\7\d\v\0\8\y\6\p\f\m\0\2\m\l\5\8\i\6\5\f\z\z\r\x\p\4\p\n\3\w\z\5\1\1\w\x\p\8\z\a\i\h\h\0\3\i\6\o\t\u\m\a\k\5\v\2\t\s\e\4\5\8\v\l\h\q\6\y\a\6\9\w\i\g\g\8\x\2\f\9\t\o\o\6\d\q\g\a\i\3\o\m\x\1\s\s\g\1\k\0\n\3\p\6\k\a\2\r\d\8\x\t\k\i\4\u\h\9\r\i\b\p\5\c\a\0\h\n\9\9\a\j\t\2\w\0\m\b\a\a\0\g\d\6\u\y\i\b\h\j\r\v\w\s\w\j\i\4\9\j\v\0\w\a\b\b\7\o\b\4\i\t\0\y\m\7\i\a\l\x\w\n\b\i\r\b\f\l\m\1\0\k\m\4\z\c\f\l\i\c\y\u\u\4\f\k\t\y\u\5\4\l\0\p\r\y\1\q\k\d\q\j\0\h\1\j\4\y\h\h\v\5\j\3\9\w\4\i\h\b\1\1\r\w\1\6\i\a\c\q\f\g\4\i\n\s\c\l\m\u\c\1\r\p\n\2\7\s\5\o\r\2\v\7\0\m\7\m\1\8\1\u\o\3\i\8\b\n\l\0\q\4\g\3\9\4\n\6\h\n\j\w\i\r\4\6\7\6\n\8\i\x\y\l\t\j\q\l\q\o\a\v\6\2\g\4\n\j\t\5\5\y\w\n\y\t\d\7\m\z\m\9\e\1\0\2\g\s\y\1\1\u\r\g\6\l\1\t\5\p\8\w\4\t\j\4\k\r\e\l\y\m\l\v\e\s\s\f\h\0\9\7\f\w\0\t\z\y\r\t\e\z\0\p\m\2\m\8\x\l\o\u\i\t\k\d\i\d\z\d\s\g\t\e\h\f\6\h\s\4\8\s\f\m\b\z\i\5\y\y\c\r\4\u\3\c\b\v\s\9\4\b\7\v\2\3\z\a ]] 00:06:37.684 10:50:24 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:37.684 10:50:24 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:37.684 [2024-11-15 10:50:24.421419] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
00:06:37.684 [2024-11-15 10:50:24.421520] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60228 ] 00:06:37.943 [2024-11-15 10:50:24.568431] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.943 [2024-11-15 10:50:24.613339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.943 [2024-11-15 10:50:24.680196] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:37.943  [2024-11-15T10:50:25.063Z] Copying: 512/512 [B] (average 250 kBps) 00:06:38.202 00:06:38.203 10:50:24 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ goo7m29eu47jjrnibw9253z67nbhg2vc9gbq2y95b4a4qq57m4ao3evl6lb6kgm97dv08y6pfm02ml58i65fzzrxp4pn3wz511wxp8zaihh03i6otumak5v2tse458vlhq6ya69wigg8x2f9too6dqgai3omx1ssg1k0n3p6ka2rd8xtki4uh9ribp5ca0hn99ajt2w0mbaa0gd6uyibhjrvwswji49jv0wabb7ob4it0ym7ialxwnbirbflm10km4zcflicyuu4fktyu54l0pry1qkdqj0h1j4yhhv5j39w4ihb11rw16iacqfg4insclmuc1rpn27s5or2v70m7m181uo3i8bnl0q4g394n6hnjwir4676n8ixyltjqlqoav62g4njt55ywnytd7mzm9e102gsy11urg6l1t5p8w4tj4krelymlvessfh097fw0tzyrtez0pm2m8xlouitkdidzdsgtehf6hs48sfmbzi5yycr4u3cbvs94b7v23za == \g\o\o\7\m\2\9\e\u\4\7\j\j\r\n\i\b\w\9\2\5\3\z\6\7\n\b\h\g\2\v\c\9\g\b\q\2\y\9\5\b\4\a\4\q\q\5\7\m\4\a\o\3\e\v\l\6\l\b\6\k\g\m\9\7\d\v\0\8\y\6\p\f\m\0\2\m\l\5\8\i\6\5\f\z\z\r\x\p\4\p\n\3\w\z\5\1\1\w\x\p\8\z\a\i\h\h\0\3\i\6\o\t\u\m\a\k\5\v\2\t\s\e\4\5\8\v\l\h\q\6\y\a\6\9\w\i\g\g\8\x\2\f\9\t\o\o\6\d\q\g\a\i\3\o\m\x\1\s\s\g\1\k\0\n\3\p\6\k\a\2\r\d\8\x\t\k\i\4\u\h\9\r\i\b\p\5\c\a\0\h\n\9\9\a\j\t\2\w\0\m\b\a\a\0\g\d\6\u\y\i\b\h\j\r\v\w\s\w\j\i\4\9\j\v\0\w\a\b\b\7\o\b\4\i\t\0\y\m\7\i\a\l\x\w\n\b\i\r\b\f\l\m\1\0\k\m\4\z\c\f\l\i\c\y\u\u\4\f\k\t\y\u\5\4\l\0\p\r\y\1\q\k\d\q\j\0\h\1\j\4\y\h\h\v\5\j\3\9\w\4\i\h\b\1\1\r\w\1\6\i\a\c\q\f\g\4\i\n\s\c\l\m\u\c\1\r\p\n\2\7\s\5\o\r\2\v\7\0\m\7\m\1\8\1\u\o\3\i\8\b\n\l\0\q\4\g\3\9\4\n\6\h\n\j\w\i\r\4\6\7\6\n\8\i\x\y\l\t\j\q\l\q\o\a\v\6\2\g\4\n\j\t\5\5\y\w\n\y\t\d\7\m\z\m\9\e\1\0\2\g\s\y\1\1\u\r\g\6\l\1\t\5\p\8\w\4\t\j\4\k\r\e\l\y\m\l\v\e\s\s\f\h\0\9\7\f\w\0\t\z\y\r\t\e\z\0\p\m\2\m\8\x\l\o\u\i\t\k\d\i\d\z\d\s\g\t\e\h\f\6\h\s\4\8\s\f\m\b\z\i\5\y\y\c\r\4\u\3\c\b\v\s\9\4\b\7\v\2\3\z\a ]] 00:06:38.203 10:50:24 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:38.203 10:50:24 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:38.203 [2024-11-15 10:50:25.016655] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
00:06:38.203 [2024-11-15 10:50:25.016754] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60237 ] 00:06:38.462 [2024-11-15 10:50:25.160137] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.462 [2024-11-15 10:50:25.212581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.462 [2024-11-15 10:50:25.281579] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:38.721  [2024-11-15T10:50:25.582Z] Copying: 512/512 [B] (average 250 kBps) 00:06:38.721 00:06:38.721 10:50:25 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ goo7m29eu47jjrnibw9253z67nbhg2vc9gbq2y95b4a4qq57m4ao3evl6lb6kgm97dv08y6pfm02ml58i65fzzrxp4pn3wz511wxp8zaihh03i6otumak5v2tse458vlhq6ya69wigg8x2f9too6dqgai3omx1ssg1k0n3p6ka2rd8xtki4uh9ribp5ca0hn99ajt2w0mbaa0gd6uyibhjrvwswji49jv0wabb7ob4it0ym7ialxwnbirbflm10km4zcflicyuu4fktyu54l0pry1qkdqj0h1j4yhhv5j39w4ihb11rw16iacqfg4insclmuc1rpn27s5or2v70m7m181uo3i8bnl0q4g394n6hnjwir4676n8ixyltjqlqoav62g4njt55ywnytd7mzm9e102gsy11urg6l1t5p8w4tj4krelymlvessfh097fw0tzyrtez0pm2m8xlouitkdidzdsgtehf6hs48sfmbzi5yycr4u3cbvs94b7v23za == \g\o\o\7\m\2\9\e\u\4\7\j\j\r\n\i\b\w\9\2\5\3\z\6\7\n\b\h\g\2\v\c\9\g\b\q\2\y\9\5\b\4\a\4\q\q\5\7\m\4\a\o\3\e\v\l\6\l\b\6\k\g\m\9\7\d\v\0\8\y\6\p\f\m\0\2\m\l\5\8\i\6\5\f\z\z\r\x\p\4\p\n\3\w\z\5\1\1\w\x\p\8\z\a\i\h\h\0\3\i\6\o\t\u\m\a\k\5\v\2\t\s\e\4\5\8\v\l\h\q\6\y\a\6\9\w\i\g\g\8\x\2\f\9\t\o\o\6\d\q\g\a\i\3\o\m\x\1\s\s\g\1\k\0\n\3\p\6\k\a\2\r\d\8\x\t\k\i\4\u\h\9\r\i\b\p\5\c\a\0\h\n\9\9\a\j\t\2\w\0\m\b\a\a\0\g\d\6\u\y\i\b\h\j\r\v\w\s\w\j\i\4\9\j\v\0\w\a\b\b\7\o\b\4\i\t\0\y\m\7\i\a\l\x\w\n\b\i\r\b\f\l\m\1\0\k\m\4\z\c\f\l\i\c\y\u\u\4\f\k\t\y\u\5\4\l\0\p\r\y\1\q\k\d\q\j\0\h\1\j\4\y\h\h\v\5\j\3\9\w\4\i\h\b\1\1\r\w\1\6\i\a\c\q\f\g\4\i\n\s\c\l\m\u\c\1\r\p\n\2\7\s\5\o\r\2\v\7\0\m\7\m\1\8\1\u\o\3\i\8\b\n\l\0\q\4\g\3\9\4\n\6\h\n\j\w\i\r\4\6\7\6\n\8\i\x\y\l\t\j\q\l\q\o\a\v\6\2\g\4\n\j\t\5\5\y\w\n\y\t\d\7\m\z\m\9\e\1\0\2\g\s\y\1\1\u\r\g\6\l\1\t\5\p\8\w\4\t\j\4\k\r\e\l\y\m\l\v\e\s\s\f\h\0\9\7\f\w\0\t\z\y\r\t\e\z\0\p\m\2\m\8\x\l\o\u\i\t\k\d\i\d\z\d\s\g\t\e\h\f\6\h\s\4\8\s\f\m\b\z\i\5\y\y\c\r\4\u\3\c\b\v\s\9\4\b\7\v\2\3\z\a ]] 00:06:38.721 00:06:38.721 real 0m4.784s 00:06:38.721 user 0m2.710s 00:06:38.721 sys 0m2.573s 00:06:38.721 10:50:25 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:38.721 10:50:25 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:06:38.721 ************************************ 00:06:38.721 END TEST dd_flags_misc 00:06:38.721 ************************************ 00:06:38.980 10:50:25 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:06:38.980 10:50:25 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:06:38.980 * Second test run, disabling liburing, forcing AIO 00:06:38.980 10:50:25 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:06:38.980 10:50:25 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:06:38.980 10:50:25 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:38.980 10:50:25 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:38.980 10:50:25 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:06:38.980 ************************************ 00:06:38.980 START TEST dd_flag_append_forced_aio 00:06:38.980 ************************************ 00:06:38.980 10:50:25 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1129 -- # append 00:06:38.980 10:50:25 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:06:38.980 10:50:25 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:06:38.980 10:50:25 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:06:38.980 10:50:25 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:38.980 10:50:25 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:38.980 10:50:25 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=b8ici9ykjj37v1fh5q2wz8a5xqb8zjix 00:06:38.980 10:50:25 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:06:38.980 10:50:25 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:38.980 10:50:25 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:38.980 10:50:25 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=lgi3i37k1v9p0dzrkgrd2bu3i68k06ag 00:06:38.980 10:50:25 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s b8ici9ykjj37v1fh5q2wz8a5xqb8zjix 00:06:38.980 10:50:25 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s lgi3i37k1v9p0dzrkgrd2bu3i68k06ag 00:06:38.980 10:50:25 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:06:38.980 [2024-11-15 10:50:25.673701] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
00:06:38.980 [2024-11-15 10:50:25.673813] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60271 ] 00:06:38.981 [2024-11-15 10:50:25.817965] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.239 [2024-11-15 10:50:25.862978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.239 [2024-11-15 10:50:25.930321] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:39.239  [2024-11-15T10:50:26.360Z] Copying: 32/32 [B] (average 31 kBps) 00:06:39.499 00:06:39.499 10:50:26 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ lgi3i37k1v9p0dzrkgrd2bu3i68k06agb8ici9ykjj37v1fh5q2wz8a5xqb8zjix == \l\g\i\3\i\3\7\k\1\v\9\p\0\d\z\r\k\g\r\d\2\b\u\3\i\6\8\k\0\6\a\g\b\8\i\c\i\9\y\k\j\j\3\7\v\1\f\h\5\q\2\w\z\8\a\5\x\q\b\8\z\j\i\x ]] 00:06:39.499 00:06:39.499 real 0m0.610s 00:06:39.499 user 0m0.335s 00:06:39.499 sys 0m0.154s 00:06:39.499 ************************************ 00:06:39.499 10:50:26 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:39.499 10:50:26 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:39.499 END TEST dd_flag_append_forced_aio 00:06:39.499 ************************************ 00:06:39.499 10:50:26 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:06:39.499 10:50:26 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:39.499 10:50:26 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:39.499 10:50:26 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:39.499 ************************************ 00:06:39.499 START TEST dd_flag_directory_forced_aio 00:06:39.499 ************************************ 00:06:39.499 10:50:26 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1129 -- # directory 00:06:39.499 10:50:26 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:39.499 10:50:26 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:06:39.499 10:50:26 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:39.499 10:50:26 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:39.499 10:50:26 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:39.499 10:50:26 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:39.499 10:50:26 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:39.499 10:50:26 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:39.499 10:50:26 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:39.499 10:50:26 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:39.499 10:50:26 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:39.499 10:50:26 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:39.499 [2024-11-15 10:50:26.335958] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:06:39.500 [2024-11-15 10:50:26.336053] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60292 ] 00:06:39.758 [2024-11-15 10:50:26.479446] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.758 [2024-11-15 10:50:26.524287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.758 [2024-11-15 10:50:26.592214] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:40.017 [2024-11-15 10:50:26.634147] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:40.017 [2024-11-15 10:50:26.634209] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:40.017 [2024-11-15 10:50:26.634227] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:40.017 [2024-11-15 10:50:26.786395] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:40.017 10:50:26 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # es=236 00:06:40.017 10:50:26 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:40.017 10:50:26 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@664 -- # es=108 00:06:40.017 10:50:26 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:06:40.017 10:50:26 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:06:40.017 10:50:26 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:40.017 10:50:26 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:40.017 10:50:26 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:06:40.017 10:50:26 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:40.017 10:50:26 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:40.276 10:50:26 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:40.276 10:50:26 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:40.276 10:50:26 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:40.276 10:50:26 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:40.276 10:50:26 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:40.276 10:50:26 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:40.276 10:50:26 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:40.276 10:50:26 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:40.276 [2024-11-15 10:50:26.934090] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:06:40.276 [2024-11-15 10:50:26.934206] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60307 ] 00:06:40.276 [2024-11-15 10:50:27.069480] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.276 [2024-11-15 10:50:27.114367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.535 [2024-11-15 10:50:27.181979] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:40.535 [2024-11-15 10:50:27.223443] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:40.535 [2024-11-15 10:50:27.223508] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:40.535 [2024-11-15 10:50:27.223537] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:40.535 [2024-11-15 10:50:27.379690] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:40.795 10:50:27 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # es=236 00:06:40.795 10:50:27 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:40.795 10:50:27 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@664 -- # es=108 00:06:40.795 10:50:27 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:06:40.795 10:50:27 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:06:40.795 10:50:27 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:40.795 00:06:40.795 real 0m1.193s 00:06:40.795 user 0m0.669s 00:06:40.795 sys 0m0.313s 00:06:40.795 10:50:27 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:40.795 ************************************ 00:06:40.795 END TEST dd_flag_directory_forced_aio 00:06:40.795 ************************************ 00:06:40.795 10:50:27 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:40.795 10:50:27 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:06:40.795 10:50:27 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:40.795 10:50:27 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:40.795 10:50:27 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:40.795 ************************************ 00:06:40.795 START TEST dd_flag_nofollow_forced_aio 00:06:40.795 ************************************ 00:06:40.795 10:50:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1129 -- # nofollow 00:06:40.795 10:50:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:40.795 10:50:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:40.795 10:50:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:40.795 10:50:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:40.795 10:50:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:40.795 10:50:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:06:40.795 10:50:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:40.795 10:50:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:40.795 10:50:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:40.795 10:50:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:40.795 10:50:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:40.795 10:50:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:40.795 10:50:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:40.795 10:50:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:40.795 10:50:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:40.795 10:50:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:40.795 [2024-11-15 10:50:27.584068] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:06:40.795 [2024-11-15 10:50:27.584173] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60339 ] 00:06:41.054 [2024-11-15 10:50:27.726870] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.054 [2024-11-15 10:50:27.772660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.054 [2024-11-15 10:50:27.839613] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:41.054 [2024-11-15 10:50:27.880801] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:41.054 [2024-11-15 10:50:27.880860] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:41.054 [2024-11-15 10:50:27.880879] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:41.313 [2024-11-15 10:50:28.035065] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:41.313 10:50:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # es=216 00:06:41.313 10:50:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:41.313 10:50:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@664 -- # es=88 00:06:41.313 10:50:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:06:41.313 10:50:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:06:41.313 10:50:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:41.313 10:50:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:41.313 10:50:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:06:41.313 10:50:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:41.313 10:50:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # local 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:41.313 10:50:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:41.313 10:50:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:41.313 10:50:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:41.313 10:50:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:41.313 10:50:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:41.313 10:50:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:41.313 10:50:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:41.313 10:50:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:41.572 [2024-11-15 10:50:28.179175] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:06:41.572 [2024-11-15 10:50:28.179280] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60345 ] 00:06:41.572 [2024-11-15 10:50:28.322417] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.572 [2024-11-15 10:50:28.366509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.831 [2024-11-15 10:50:28.435477] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:41.831 [2024-11-15 10:50:28.476891] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:41.831 [2024-11-15 10:50:28.476954] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:41.831 [2024-11-15 10:50:28.476973] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:41.831 [2024-11-15 10:50:28.632216] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:42.173 10:50:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # es=216 00:06:42.173 10:50:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:42.173 10:50:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@664 -- # es=88 00:06:42.173 10:50:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:06:42.173 10:50:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:06:42.173 10:50:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:42.173 10:50:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 
-- # gen_bytes 512 00:06:42.173 10:50:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:42.173 10:50:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:42.173 10:50:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:42.173 [2024-11-15 10:50:28.802599] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:06:42.173 [2024-11-15 10:50:28.802770] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60358 ] 00:06:42.173 [2024-11-15 10:50:28.958509] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.444 [2024-11-15 10:50:29.002387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.444 [2024-11-15 10:50:29.069680] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:42.444  [2024-11-15T10:50:29.564Z] Copying: 512/512 [B] (average 500 kBps) 00:06:42.703 00:06:42.703 10:50:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ tpak0chrv3m6lkmk6ipf2g14op4d5vc0k1laf9r4gqzxv4yul6txz8uw1w71ggzykanmeye86xopst4lvkymsvy6kkh17w0ck9uir69rnagbhc3uppmmdnafreo0vq5zbi14iswr186w3kd6kevh8nybkj5w5vq2k5poflssvd40wwsndu6x59ybuh3ln8ao70vtjas9dpd2oxvotf686vmmaqqxdogvj44mzr97ev5te59nc9ksi38gbvu5m9lh4klj0eh1ipqekb793mm9vf4fknx6xsz43x7f8f979hq79xhnovfxj892qmogsatl7gfpyu1k7k8e1lskw1aveo3igf7q8aog4b3ugbjq47w5nnivitie4x6j0bnzea4uff6lnx3k4ynslx7rceoo80lkmuvm9brveaug4eo6706op648izqq71h5rwsvnpxqxl8xc267s2ktcp9bc3uma9gx37wtygu94c69xsov0r3z2pe5y9fjm47khfsyl3fn == \t\p\a\k\0\c\h\r\v\3\m\6\l\k\m\k\6\i\p\f\2\g\1\4\o\p\4\d\5\v\c\0\k\1\l\a\f\9\r\4\g\q\z\x\v\4\y\u\l\6\t\x\z\8\u\w\1\w\7\1\g\g\z\y\k\a\n\m\e\y\e\8\6\x\o\p\s\t\4\l\v\k\y\m\s\v\y\6\k\k\h\1\7\w\0\c\k\9\u\i\r\6\9\r\n\a\g\b\h\c\3\u\p\p\m\m\d\n\a\f\r\e\o\0\v\q\5\z\b\i\1\4\i\s\w\r\1\8\6\w\3\k\d\6\k\e\v\h\8\n\y\b\k\j\5\w\5\v\q\2\k\5\p\o\f\l\s\s\v\d\4\0\w\w\s\n\d\u\6\x\5\9\y\b\u\h\3\l\n\8\a\o\7\0\v\t\j\a\s\9\d\p\d\2\o\x\v\o\t\f\6\8\6\v\m\m\a\q\q\x\d\o\g\v\j\4\4\m\z\r\9\7\e\v\5\t\e\5\9\n\c\9\k\s\i\3\8\g\b\v\u\5\m\9\l\h\4\k\l\j\0\e\h\1\i\p\q\e\k\b\7\9\3\m\m\9\v\f\4\f\k\n\x\6\x\s\z\4\3\x\7\f\8\f\9\7\9\h\q\7\9\x\h\n\o\v\f\x\j\8\9\2\q\m\o\g\s\a\t\l\7\g\f\p\y\u\1\k\7\k\8\e\1\l\s\k\w\1\a\v\e\o\3\i\g\f\7\q\8\a\o\g\4\b\3\u\g\b\j\q\4\7\w\5\n\n\i\v\i\t\i\e\4\x\6\j\0\b\n\z\e\a\4\u\f\f\6\l\n\x\3\k\4\y\n\s\l\x\7\r\c\e\o\o\8\0\l\k\m\u\v\m\9\b\r\v\e\a\u\g\4\e\o\6\7\0\6\o\p\6\4\8\i\z\q\q\7\1\h\5\r\w\s\v\n\p\x\q\x\l\8\x\c\2\6\7\s\2\k\t\c\p\9\b\c\3\u\m\a\9\g\x\3\7\w\t\y\g\u\9\4\c\6\9\x\s\o\v\0\r\3\z\2\p\e\5\y\9\f\j\m\4\7\k\h\f\s\y\l\3\f\n ]] 00:06:42.703 00:06:42.703 real 0m1.847s 00:06:42.703 user 0m1.041s 00:06:42.703 sys 0m0.475s 00:06:42.703 10:50:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:42.703 ************************************ 00:06:42.703 END TEST dd_flag_nofollow_forced_aio 00:06:42.703 ************************************ 00:06:42.703 10:50:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:42.703 10:50:29 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 
-- # run_test dd_flag_noatime_forced_aio noatime 00:06:42.703 10:50:29 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:42.703 10:50:29 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:42.703 10:50:29 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:42.703 ************************************ 00:06:42.703 START TEST dd_flag_noatime_forced_aio 00:06:42.703 ************************************ 00:06:42.703 10:50:29 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1129 -- # noatime 00:06:42.703 10:50:29 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:06:42.703 10:50:29 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:06:42.703 10:50:29 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:06:42.703 10:50:29 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:42.703 10:50:29 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:42.703 10:50:29 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:42.703 10:50:29 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1731667829 00:06:42.703 10:50:29 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:42.703 10:50:29 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1731667829 00:06:42.703 10:50:29 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:06:43.643 10:50:30 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:43.643 [2024-11-15 10:50:30.498161] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
00:06:43.643 [2024-11-15 10:50:30.498296] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60399 ] 00:06:43.902 [2024-11-15 10:50:30.650034] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.902 [2024-11-15 10:50:30.711633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.162 [2024-11-15 10:50:30.783162] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:44.162  [2024-11-15T10:50:31.281Z] Copying: 512/512 [B] (average 500 kBps) 00:06:44.420 00:06:44.420 10:50:31 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:44.420 10:50:31 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1731667829 )) 00:06:44.420 10:50:31 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:44.420 10:50:31 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1731667829 )) 00:06:44.420 10:50:31 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:44.420 [2024-11-15 10:50:31.140501] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:06:44.420 [2024-11-15 10:50:31.140621] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60410 ] 00:06:44.679 [2024-11-15 10:50:31.282991] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.679 [2024-11-15 10:50:31.325702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.679 [2024-11-15 10:50:31.392275] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:44.679  [2024-11-15T10:50:31.799Z] Copying: 512/512 [B] (average 500 kBps) 00:06:44.938 00:06:44.938 10:50:31 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:44.938 10:50:31 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1731667831 )) 00:06:44.938 00:06:44.938 real 0m2.275s 00:06:44.938 user 0m0.702s 00:06:44.938 sys 0m0.329s 00:06:44.938 10:50:31 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:44.938 10:50:31 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:44.938 ************************************ 00:06:44.938 END TEST dd_flag_noatime_forced_aio 00:06:44.938 ************************************ 00:06:44.938 10:50:31 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:06:44.938 10:50:31 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:44.938 10:50:31 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:44.938 10:50:31 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:06:44.938 ************************************ 00:06:44.938 START TEST dd_flags_misc_forced_aio 00:06:44.938 ************************************ 00:06:44.938 10:50:31 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1129 -- # io 00:06:44.938 10:50:31 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:06:44.938 10:50:31 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:06:44.938 10:50:31 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:06:44.938 10:50:31 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:44.939 10:50:31 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:06:44.939 10:50:31 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:44.939 10:50:31 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:44.939 10:50:31 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:44.939 10:50:31 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:45.197 [2024-11-15 10:50:31.805796] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:06:45.197 [2024-11-15 10:50:31.805878] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60443 ] 00:06:45.197 [2024-11-15 10:50:31.950256] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.197 [2024-11-15 10:50:31.999294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.456 [2024-11-15 10:50:32.066656] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:45.456  [2024-11-15T10:50:32.576Z] Copying: 512/512 [B] (average 500 kBps) 00:06:45.715 00:06:45.715 10:50:32 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ ism7mj14caigxi5ue5rm0cdov42tj31uppyasmgtgwtccu9bldtwry5d4k06gqb6mpou2q5oxkxtt5f37l5gzctoetifzq4zp2cc9ffg98qviyxeuatse0nll5my07ew886691o9cvrhz304thyunpps71usgjthwpwpabik3i2aerm7surachuqucpx8gzrnbjeqbda6nq9uee5mi2d9kfba8hso05ms99e35wxqkqd4vmzrykbda5a54ifr21zq70xep2hwhr0jow4rxxdgk9gd6nlrjwjewyhl2089cmxg9mr7hzu1vz0hu543xugkhq24l0rjnvwnwcwgbzsu35bgh66crnttgw453fsfeyd63an9sd9782cj1v8y67yg0i1r4b2kaas7y793vg2ivv2ihe4jkhvyutma0rr6akoduu8purdhpq4yiruh2nzo1mpob9ga0nd9h0ms1ghimn4aizip7pcii0oozsijskyqqo6x374lm3cmbxp1v4w == 
\i\s\m\7\m\j\1\4\c\a\i\g\x\i\5\u\e\5\r\m\0\c\d\o\v\4\2\t\j\3\1\u\p\p\y\a\s\m\g\t\g\w\t\c\c\u\9\b\l\d\t\w\r\y\5\d\4\k\0\6\g\q\b\6\m\p\o\u\2\q\5\o\x\k\x\t\t\5\f\3\7\l\5\g\z\c\t\o\e\t\i\f\z\q\4\z\p\2\c\c\9\f\f\g\9\8\q\v\i\y\x\e\u\a\t\s\e\0\n\l\l\5\m\y\0\7\e\w\8\8\6\6\9\1\o\9\c\v\r\h\z\3\0\4\t\h\y\u\n\p\p\s\7\1\u\s\g\j\t\h\w\p\w\p\a\b\i\k\3\i\2\a\e\r\m\7\s\u\r\a\c\h\u\q\u\c\p\x\8\g\z\r\n\b\j\e\q\b\d\a\6\n\q\9\u\e\e\5\m\i\2\d\9\k\f\b\a\8\h\s\o\0\5\m\s\9\9\e\3\5\w\x\q\k\q\d\4\v\m\z\r\y\k\b\d\a\5\a\5\4\i\f\r\2\1\z\q\7\0\x\e\p\2\h\w\h\r\0\j\o\w\4\r\x\x\d\g\k\9\g\d\6\n\l\r\j\w\j\e\w\y\h\l\2\0\8\9\c\m\x\g\9\m\r\7\h\z\u\1\v\z\0\h\u\5\4\3\x\u\g\k\h\q\2\4\l\0\r\j\n\v\w\n\w\c\w\g\b\z\s\u\3\5\b\g\h\6\6\c\r\n\t\t\g\w\4\5\3\f\s\f\e\y\d\6\3\a\n\9\s\d\9\7\8\2\c\j\1\v\8\y\6\7\y\g\0\i\1\r\4\b\2\k\a\a\s\7\y\7\9\3\v\g\2\i\v\v\2\i\h\e\4\j\k\h\v\y\u\t\m\a\0\r\r\6\a\k\o\d\u\u\8\p\u\r\d\h\p\q\4\y\i\r\u\h\2\n\z\o\1\m\p\o\b\9\g\a\0\n\d\9\h\0\m\s\1\g\h\i\m\n\4\a\i\z\i\p\7\p\c\i\i\0\o\o\z\s\i\j\s\k\y\q\q\o\6\x\3\7\4\l\m\3\c\m\b\x\p\1\v\4\w ]] 00:06:45.715 10:50:32 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:45.716 10:50:32 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:45.716 [2024-11-15 10:50:32.397617] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:06:45.716 [2024-11-15 10:50:32.397698] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60446 ] 00:06:45.716 [2024-11-15 10:50:32.536578] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.975 [2024-11-15 10:50:32.584697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.975 [2024-11-15 10:50:32.652132] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:45.975  [2024-11-15T10:50:33.096Z] Copying: 512/512 [B] (average 500 kBps) 00:06:46.235 00:06:46.235 10:50:32 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ ism7mj14caigxi5ue5rm0cdov42tj31uppyasmgtgwtccu9bldtwry5d4k06gqb6mpou2q5oxkxtt5f37l5gzctoetifzq4zp2cc9ffg98qviyxeuatse0nll5my07ew886691o9cvrhz304thyunpps71usgjthwpwpabik3i2aerm7surachuqucpx8gzrnbjeqbda6nq9uee5mi2d9kfba8hso05ms99e35wxqkqd4vmzrykbda5a54ifr21zq70xep2hwhr0jow4rxxdgk9gd6nlrjwjewyhl2089cmxg9mr7hzu1vz0hu543xugkhq24l0rjnvwnwcwgbzsu35bgh66crnttgw453fsfeyd63an9sd9782cj1v8y67yg0i1r4b2kaas7y793vg2ivv2ihe4jkhvyutma0rr6akoduu8purdhpq4yiruh2nzo1mpob9ga0nd9h0ms1ghimn4aizip7pcii0oozsijskyqqo6x374lm3cmbxp1v4w == 
\i\s\m\7\m\j\1\4\c\a\i\g\x\i\5\u\e\5\r\m\0\c\d\o\v\4\2\t\j\3\1\u\p\p\y\a\s\m\g\t\g\w\t\c\c\u\9\b\l\d\t\w\r\y\5\d\4\k\0\6\g\q\b\6\m\p\o\u\2\q\5\o\x\k\x\t\t\5\f\3\7\l\5\g\z\c\t\o\e\t\i\f\z\q\4\z\p\2\c\c\9\f\f\g\9\8\q\v\i\y\x\e\u\a\t\s\e\0\n\l\l\5\m\y\0\7\e\w\8\8\6\6\9\1\o\9\c\v\r\h\z\3\0\4\t\h\y\u\n\p\p\s\7\1\u\s\g\j\t\h\w\p\w\p\a\b\i\k\3\i\2\a\e\r\m\7\s\u\r\a\c\h\u\q\u\c\p\x\8\g\z\r\n\b\j\e\q\b\d\a\6\n\q\9\u\e\e\5\m\i\2\d\9\k\f\b\a\8\h\s\o\0\5\m\s\9\9\e\3\5\w\x\q\k\q\d\4\v\m\z\r\y\k\b\d\a\5\a\5\4\i\f\r\2\1\z\q\7\0\x\e\p\2\h\w\h\r\0\j\o\w\4\r\x\x\d\g\k\9\g\d\6\n\l\r\j\w\j\e\w\y\h\l\2\0\8\9\c\m\x\g\9\m\r\7\h\z\u\1\v\z\0\h\u\5\4\3\x\u\g\k\h\q\2\4\l\0\r\j\n\v\w\n\w\c\w\g\b\z\s\u\3\5\b\g\h\6\6\c\r\n\t\t\g\w\4\5\3\f\s\f\e\y\d\6\3\a\n\9\s\d\9\7\8\2\c\j\1\v\8\y\6\7\y\g\0\i\1\r\4\b\2\k\a\a\s\7\y\7\9\3\v\g\2\i\v\v\2\i\h\e\4\j\k\h\v\y\u\t\m\a\0\r\r\6\a\k\o\d\u\u\8\p\u\r\d\h\p\q\4\y\i\r\u\h\2\n\z\o\1\m\p\o\b\9\g\a\0\n\d\9\h\0\m\s\1\g\h\i\m\n\4\a\i\z\i\p\7\p\c\i\i\0\o\o\z\s\i\j\s\k\y\q\q\o\6\x\3\7\4\l\m\3\c\m\b\x\p\1\v\4\w ]] 00:06:46.235 10:50:32 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:46.235 10:50:32 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:46.235 [2024-11-15 10:50:33.011083] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:06:46.235 [2024-11-15 10:50:33.011175] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60459 ] 00:06:46.495 [2024-11-15 10:50:33.154898] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.495 [2024-11-15 10:50:33.201671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.495 [2024-11-15 10:50:33.269193] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:46.495  [2024-11-15T10:50:33.616Z] Copying: 512/512 [B] (average 250 kBps) 00:06:46.755 00:06:46.755 10:50:33 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ ism7mj14caigxi5ue5rm0cdov42tj31uppyasmgtgwtccu9bldtwry5d4k06gqb6mpou2q5oxkxtt5f37l5gzctoetifzq4zp2cc9ffg98qviyxeuatse0nll5my07ew886691o9cvrhz304thyunpps71usgjthwpwpabik3i2aerm7surachuqucpx8gzrnbjeqbda6nq9uee5mi2d9kfba8hso05ms99e35wxqkqd4vmzrykbda5a54ifr21zq70xep2hwhr0jow4rxxdgk9gd6nlrjwjewyhl2089cmxg9mr7hzu1vz0hu543xugkhq24l0rjnvwnwcwgbzsu35bgh66crnttgw453fsfeyd63an9sd9782cj1v8y67yg0i1r4b2kaas7y793vg2ivv2ihe4jkhvyutma0rr6akoduu8purdhpq4yiruh2nzo1mpob9ga0nd9h0ms1ghimn4aizip7pcii0oozsijskyqqo6x374lm3cmbxp1v4w == 
\i\s\m\7\m\j\1\4\c\a\i\g\x\i\5\u\e\5\r\m\0\c\d\o\v\4\2\t\j\3\1\u\p\p\y\a\s\m\g\t\g\w\t\c\c\u\9\b\l\d\t\w\r\y\5\d\4\k\0\6\g\q\b\6\m\p\o\u\2\q\5\o\x\k\x\t\t\5\f\3\7\l\5\g\z\c\t\o\e\t\i\f\z\q\4\z\p\2\c\c\9\f\f\g\9\8\q\v\i\y\x\e\u\a\t\s\e\0\n\l\l\5\m\y\0\7\e\w\8\8\6\6\9\1\o\9\c\v\r\h\z\3\0\4\t\h\y\u\n\p\p\s\7\1\u\s\g\j\t\h\w\p\w\p\a\b\i\k\3\i\2\a\e\r\m\7\s\u\r\a\c\h\u\q\u\c\p\x\8\g\z\r\n\b\j\e\q\b\d\a\6\n\q\9\u\e\e\5\m\i\2\d\9\k\f\b\a\8\h\s\o\0\5\m\s\9\9\e\3\5\w\x\q\k\q\d\4\v\m\z\r\y\k\b\d\a\5\a\5\4\i\f\r\2\1\z\q\7\0\x\e\p\2\h\w\h\r\0\j\o\w\4\r\x\x\d\g\k\9\g\d\6\n\l\r\j\w\j\e\w\y\h\l\2\0\8\9\c\m\x\g\9\m\r\7\h\z\u\1\v\z\0\h\u\5\4\3\x\u\g\k\h\q\2\4\l\0\r\j\n\v\w\n\w\c\w\g\b\z\s\u\3\5\b\g\h\6\6\c\r\n\t\t\g\w\4\5\3\f\s\f\e\y\d\6\3\a\n\9\s\d\9\7\8\2\c\j\1\v\8\y\6\7\y\g\0\i\1\r\4\b\2\k\a\a\s\7\y\7\9\3\v\g\2\i\v\v\2\i\h\e\4\j\k\h\v\y\u\t\m\a\0\r\r\6\a\k\o\d\u\u\8\p\u\r\d\h\p\q\4\y\i\r\u\h\2\n\z\o\1\m\p\o\b\9\g\a\0\n\d\9\h\0\m\s\1\g\h\i\m\n\4\a\i\z\i\p\7\p\c\i\i\0\o\o\z\s\i\j\s\k\y\q\q\o\6\x\3\7\4\l\m\3\c\m\b\x\p\1\v\4\w ]] 00:06:46.755 10:50:33 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:46.755 10:50:33 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:47.015 [2024-11-15 10:50:33.637771] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:06:47.015 [2024-11-15 10:50:33.637889] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60467 ] 00:06:47.015 [2024-11-15 10:50:33.782457] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.015 [2024-11-15 10:50:33.825459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.275 [2024-11-15 10:50:33.893819] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:47.275  [2024-11-15T10:50:34.396Z] Copying: 512/512 [B] (average 500 kBps) 00:06:47.535 00:06:47.535 10:50:34 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ ism7mj14caigxi5ue5rm0cdov42tj31uppyasmgtgwtccu9bldtwry5d4k06gqb6mpou2q5oxkxtt5f37l5gzctoetifzq4zp2cc9ffg98qviyxeuatse0nll5my07ew886691o9cvrhz304thyunpps71usgjthwpwpabik3i2aerm7surachuqucpx8gzrnbjeqbda6nq9uee5mi2d9kfba8hso05ms99e35wxqkqd4vmzrykbda5a54ifr21zq70xep2hwhr0jow4rxxdgk9gd6nlrjwjewyhl2089cmxg9mr7hzu1vz0hu543xugkhq24l0rjnvwnwcwgbzsu35bgh66crnttgw453fsfeyd63an9sd9782cj1v8y67yg0i1r4b2kaas7y793vg2ivv2ihe4jkhvyutma0rr6akoduu8purdhpq4yiruh2nzo1mpob9ga0nd9h0ms1ghimn4aizip7pcii0oozsijskyqqo6x374lm3cmbxp1v4w == 
\i\s\m\7\m\j\1\4\c\a\i\g\x\i\5\u\e\5\r\m\0\c\d\o\v\4\2\t\j\3\1\u\p\p\y\a\s\m\g\t\g\w\t\c\c\u\9\b\l\d\t\w\r\y\5\d\4\k\0\6\g\q\b\6\m\p\o\u\2\q\5\o\x\k\x\t\t\5\f\3\7\l\5\g\z\c\t\o\e\t\i\f\z\q\4\z\p\2\c\c\9\f\f\g\9\8\q\v\i\y\x\e\u\a\t\s\e\0\n\l\l\5\m\y\0\7\e\w\8\8\6\6\9\1\o\9\c\v\r\h\z\3\0\4\t\h\y\u\n\p\p\s\7\1\u\s\g\j\t\h\w\p\w\p\a\b\i\k\3\i\2\a\e\r\m\7\s\u\r\a\c\h\u\q\u\c\p\x\8\g\z\r\n\b\j\e\q\b\d\a\6\n\q\9\u\e\e\5\m\i\2\d\9\k\f\b\a\8\h\s\o\0\5\m\s\9\9\e\3\5\w\x\q\k\q\d\4\v\m\z\r\y\k\b\d\a\5\a\5\4\i\f\r\2\1\z\q\7\0\x\e\p\2\h\w\h\r\0\j\o\w\4\r\x\x\d\g\k\9\g\d\6\n\l\r\j\w\j\e\w\y\h\l\2\0\8\9\c\m\x\g\9\m\r\7\h\z\u\1\v\z\0\h\u\5\4\3\x\u\g\k\h\q\2\4\l\0\r\j\n\v\w\n\w\c\w\g\b\z\s\u\3\5\b\g\h\6\6\c\r\n\t\t\g\w\4\5\3\f\s\f\e\y\d\6\3\a\n\9\s\d\9\7\8\2\c\j\1\v\8\y\6\7\y\g\0\i\1\r\4\b\2\k\a\a\s\7\y\7\9\3\v\g\2\i\v\v\2\i\h\e\4\j\k\h\v\y\u\t\m\a\0\r\r\6\a\k\o\d\u\u\8\p\u\r\d\h\p\q\4\y\i\r\u\h\2\n\z\o\1\m\p\o\b\9\g\a\0\n\d\9\h\0\m\s\1\g\h\i\m\n\4\a\i\z\i\p\7\p\c\i\i\0\o\o\z\s\i\j\s\k\y\q\q\o\6\x\3\7\4\l\m\3\c\m\b\x\p\1\v\4\w ]] 00:06:47.535 10:50:34 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:47.535 10:50:34 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:06:47.535 10:50:34 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:47.535 10:50:34 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:47.535 10:50:34 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:47.535 10:50:34 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:47.535 [2024-11-15 10:50:34.264169] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
00:06:47.535 [2024-11-15 10:50:34.264287] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60474 ] 00:06:47.794 [2024-11-15 10:50:34.407122] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.794 [2024-11-15 10:50:34.452325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.794 [2024-11-15 10:50:34.521552] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:47.794  [2024-11-15T10:50:34.914Z] Copying: 512/512 [B] (average 500 kBps) 00:06:48.053 00:06:48.053 10:50:34 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ wixfdl3r3xdjwarb75e22ar7y5xb9bhj4dp9ci305g68b6jaeu5jvaqvn1hcl8419hbgmksfbr27ai2rtgy2ibcm7e84oxhicdk40dpi2gfnlujrckddmb7jyn66a94q0hz805arsitszj5vy73qmti7yqdpn4g95nb2mlbx3nhnz27v243v7xjsffsdtq337xk84qb1l37t1nwh1ba97a5hpgocybiawd06o25jt8gssvje2dxr8l7l68s3yeju0ohjmpfupeh7h2mtkym3gfrsjc6o5sbflexulpd2dcybi0fciuvu0m10jybccib8uk18gxho1uvpx8086zn8chuynwj1dtn0hc0vw6vh72byhua2vbks9cpw6kt7ilqx25aogyymz41ydo6nmssvnquqk1g110rc8o90pi0fz14smgiuilgil1dlhbkb8x08jaeatvspklhw4t700823byw703wz5114hg5s30b80qmzxidp8nb8ea16ww9hy933 == \w\i\x\f\d\l\3\r\3\x\d\j\w\a\r\b\7\5\e\2\2\a\r\7\y\5\x\b\9\b\h\j\4\d\p\9\c\i\3\0\5\g\6\8\b\6\j\a\e\u\5\j\v\a\q\v\n\1\h\c\l\8\4\1\9\h\b\g\m\k\s\f\b\r\2\7\a\i\2\r\t\g\y\2\i\b\c\m\7\e\8\4\o\x\h\i\c\d\k\4\0\d\p\i\2\g\f\n\l\u\j\r\c\k\d\d\m\b\7\j\y\n\6\6\a\9\4\q\0\h\z\8\0\5\a\r\s\i\t\s\z\j\5\v\y\7\3\q\m\t\i\7\y\q\d\p\n\4\g\9\5\n\b\2\m\l\b\x\3\n\h\n\z\2\7\v\2\4\3\v\7\x\j\s\f\f\s\d\t\q\3\3\7\x\k\8\4\q\b\1\l\3\7\t\1\n\w\h\1\b\a\9\7\a\5\h\p\g\o\c\y\b\i\a\w\d\0\6\o\2\5\j\t\8\g\s\s\v\j\e\2\d\x\r\8\l\7\l\6\8\s\3\y\e\j\u\0\o\h\j\m\p\f\u\p\e\h\7\h\2\m\t\k\y\m\3\g\f\r\s\j\c\6\o\5\s\b\f\l\e\x\u\l\p\d\2\d\c\y\b\i\0\f\c\i\u\v\u\0\m\1\0\j\y\b\c\c\i\b\8\u\k\1\8\g\x\h\o\1\u\v\p\x\8\0\8\6\z\n\8\c\h\u\y\n\w\j\1\d\t\n\0\h\c\0\v\w\6\v\h\7\2\b\y\h\u\a\2\v\b\k\s\9\c\p\w\6\k\t\7\i\l\q\x\2\5\a\o\g\y\y\m\z\4\1\y\d\o\6\n\m\s\s\v\n\q\u\q\k\1\g\1\1\0\r\c\8\o\9\0\p\i\0\f\z\1\4\s\m\g\i\u\i\l\g\i\l\1\d\l\h\b\k\b\8\x\0\8\j\a\e\a\t\v\s\p\k\l\h\w\4\t\7\0\0\8\2\3\b\y\w\7\0\3\w\z\5\1\1\4\h\g\5\s\3\0\b\8\0\q\m\z\x\i\d\p\8\n\b\8\e\a\1\6\w\w\9\h\y\9\3\3 ]] 00:06:48.053 10:50:34 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:48.053 10:50:34 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:48.053 [2024-11-15 10:50:34.886799] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
00:06:48.053 [2024-11-15 10:50:34.886917] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60482 ] 00:06:48.317 [2024-11-15 10:50:35.034542] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.317 [2024-11-15 10:50:35.077933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.317 [2024-11-15 10:50:35.145393] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:48.576  [2024-11-15T10:50:35.437Z] Copying: 512/512 [B] (average 500 kBps) 00:06:48.576 00:06:48.576 10:50:35 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ wixfdl3r3xdjwarb75e22ar7y5xb9bhj4dp9ci305g68b6jaeu5jvaqvn1hcl8419hbgmksfbr27ai2rtgy2ibcm7e84oxhicdk40dpi2gfnlujrckddmb7jyn66a94q0hz805arsitszj5vy73qmti7yqdpn4g95nb2mlbx3nhnz27v243v7xjsffsdtq337xk84qb1l37t1nwh1ba97a5hpgocybiawd06o25jt8gssvje2dxr8l7l68s3yeju0ohjmpfupeh7h2mtkym3gfrsjc6o5sbflexulpd2dcybi0fciuvu0m10jybccib8uk18gxho1uvpx8086zn8chuynwj1dtn0hc0vw6vh72byhua2vbks9cpw6kt7ilqx25aogyymz41ydo6nmssvnquqk1g110rc8o90pi0fz14smgiuilgil1dlhbkb8x08jaeatvspklhw4t700823byw703wz5114hg5s30b80qmzxidp8nb8ea16ww9hy933 == \w\i\x\f\d\l\3\r\3\x\d\j\w\a\r\b\7\5\e\2\2\a\r\7\y\5\x\b\9\b\h\j\4\d\p\9\c\i\3\0\5\g\6\8\b\6\j\a\e\u\5\j\v\a\q\v\n\1\h\c\l\8\4\1\9\h\b\g\m\k\s\f\b\r\2\7\a\i\2\r\t\g\y\2\i\b\c\m\7\e\8\4\o\x\h\i\c\d\k\4\0\d\p\i\2\g\f\n\l\u\j\r\c\k\d\d\m\b\7\j\y\n\6\6\a\9\4\q\0\h\z\8\0\5\a\r\s\i\t\s\z\j\5\v\y\7\3\q\m\t\i\7\y\q\d\p\n\4\g\9\5\n\b\2\m\l\b\x\3\n\h\n\z\2\7\v\2\4\3\v\7\x\j\s\f\f\s\d\t\q\3\3\7\x\k\8\4\q\b\1\l\3\7\t\1\n\w\h\1\b\a\9\7\a\5\h\p\g\o\c\y\b\i\a\w\d\0\6\o\2\5\j\t\8\g\s\s\v\j\e\2\d\x\r\8\l\7\l\6\8\s\3\y\e\j\u\0\o\h\j\m\p\f\u\p\e\h\7\h\2\m\t\k\y\m\3\g\f\r\s\j\c\6\o\5\s\b\f\l\e\x\u\l\p\d\2\d\c\y\b\i\0\f\c\i\u\v\u\0\m\1\0\j\y\b\c\c\i\b\8\u\k\1\8\g\x\h\o\1\u\v\p\x\8\0\8\6\z\n\8\c\h\u\y\n\w\j\1\d\t\n\0\h\c\0\v\w\6\v\h\7\2\b\y\h\u\a\2\v\b\k\s\9\c\p\w\6\k\t\7\i\l\q\x\2\5\a\o\g\y\y\m\z\4\1\y\d\o\6\n\m\s\s\v\n\q\u\q\k\1\g\1\1\0\r\c\8\o\9\0\p\i\0\f\z\1\4\s\m\g\i\u\i\l\g\i\l\1\d\l\h\b\k\b\8\x\0\8\j\a\e\a\t\v\s\p\k\l\h\w\4\t\7\0\0\8\2\3\b\y\w\7\0\3\w\z\5\1\1\4\h\g\5\s\3\0\b\8\0\q\m\z\x\i\d\p\8\n\b\8\e\a\1\6\w\w\9\h\y\9\3\3 ]] 00:06:48.576 10:50:35 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:48.576 10:50:35 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:48.836 [2024-11-15 10:50:35.477128] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
00:06:48.836 [2024-11-15 10:50:35.477215] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60489 ] 00:06:48.836 [2024-11-15 10:50:35.615209] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.836 [2024-11-15 10:50:35.660386] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.095 [2024-11-15 10:50:35.729076] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:49.095  [2024-11-15T10:50:36.215Z] Copying: 512/512 [B] (average 500 kBps) 00:06:49.354 00:06:49.354 10:50:36 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ wixfdl3r3xdjwarb75e22ar7y5xb9bhj4dp9ci305g68b6jaeu5jvaqvn1hcl8419hbgmksfbr27ai2rtgy2ibcm7e84oxhicdk40dpi2gfnlujrckddmb7jyn66a94q0hz805arsitszj5vy73qmti7yqdpn4g95nb2mlbx3nhnz27v243v7xjsffsdtq337xk84qb1l37t1nwh1ba97a5hpgocybiawd06o25jt8gssvje2dxr8l7l68s3yeju0ohjmpfupeh7h2mtkym3gfrsjc6o5sbflexulpd2dcybi0fciuvu0m10jybccib8uk18gxho1uvpx8086zn8chuynwj1dtn0hc0vw6vh72byhua2vbks9cpw6kt7ilqx25aogyymz41ydo6nmssvnquqk1g110rc8o90pi0fz14smgiuilgil1dlhbkb8x08jaeatvspklhw4t700823byw703wz5114hg5s30b80qmzxidp8nb8ea16ww9hy933 == \w\i\x\f\d\l\3\r\3\x\d\j\w\a\r\b\7\5\e\2\2\a\r\7\y\5\x\b\9\b\h\j\4\d\p\9\c\i\3\0\5\g\6\8\b\6\j\a\e\u\5\j\v\a\q\v\n\1\h\c\l\8\4\1\9\h\b\g\m\k\s\f\b\r\2\7\a\i\2\r\t\g\y\2\i\b\c\m\7\e\8\4\o\x\h\i\c\d\k\4\0\d\p\i\2\g\f\n\l\u\j\r\c\k\d\d\m\b\7\j\y\n\6\6\a\9\4\q\0\h\z\8\0\5\a\r\s\i\t\s\z\j\5\v\y\7\3\q\m\t\i\7\y\q\d\p\n\4\g\9\5\n\b\2\m\l\b\x\3\n\h\n\z\2\7\v\2\4\3\v\7\x\j\s\f\f\s\d\t\q\3\3\7\x\k\8\4\q\b\1\l\3\7\t\1\n\w\h\1\b\a\9\7\a\5\h\p\g\o\c\y\b\i\a\w\d\0\6\o\2\5\j\t\8\g\s\s\v\j\e\2\d\x\r\8\l\7\l\6\8\s\3\y\e\j\u\0\o\h\j\m\p\f\u\p\e\h\7\h\2\m\t\k\y\m\3\g\f\r\s\j\c\6\o\5\s\b\f\l\e\x\u\l\p\d\2\d\c\y\b\i\0\f\c\i\u\v\u\0\m\1\0\j\y\b\c\c\i\b\8\u\k\1\8\g\x\h\o\1\u\v\p\x\8\0\8\6\z\n\8\c\h\u\y\n\w\j\1\d\t\n\0\h\c\0\v\w\6\v\h\7\2\b\y\h\u\a\2\v\b\k\s\9\c\p\w\6\k\t\7\i\l\q\x\2\5\a\o\g\y\y\m\z\4\1\y\d\o\6\n\m\s\s\v\n\q\u\q\k\1\g\1\1\0\r\c\8\o\9\0\p\i\0\f\z\1\4\s\m\g\i\u\i\l\g\i\l\1\d\l\h\b\k\b\8\x\0\8\j\a\e\a\t\v\s\p\k\l\h\w\4\t\7\0\0\8\2\3\b\y\w\7\0\3\w\z\5\1\1\4\h\g\5\s\3\0\b\8\0\q\m\z\x\i\d\p\8\n\b\8\e\a\1\6\w\w\9\h\y\9\3\3 ]] 00:06:49.354 10:50:36 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:49.354 10:50:36 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:49.354 [2024-11-15 10:50:36.085678] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
00:06:49.354 [2024-11-15 10:50:36.085799] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60502 ] 00:06:49.613 [2024-11-15 10:50:36.223357] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.613 [2024-11-15 10:50:36.267283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.613 [2024-11-15 10:50:36.334853] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:49.613  [2024-11-15T10:50:36.734Z] Copying: 512/512 [B] (average 250 kBps) 00:06:49.873 00:06:49.873 10:50:36 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ wixfdl3r3xdjwarb75e22ar7y5xb9bhj4dp9ci305g68b6jaeu5jvaqvn1hcl8419hbgmksfbr27ai2rtgy2ibcm7e84oxhicdk40dpi2gfnlujrckddmb7jyn66a94q0hz805arsitszj5vy73qmti7yqdpn4g95nb2mlbx3nhnz27v243v7xjsffsdtq337xk84qb1l37t1nwh1ba97a5hpgocybiawd06o25jt8gssvje2dxr8l7l68s3yeju0ohjmpfupeh7h2mtkym3gfrsjc6o5sbflexulpd2dcybi0fciuvu0m10jybccib8uk18gxho1uvpx8086zn8chuynwj1dtn0hc0vw6vh72byhua2vbks9cpw6kt7ilqx25aogyymz41ydo6nmssvnquqk1g110rc8o90pi0fz14smgiuilgil1dlhbkb8x08jaeatvspklhw4t700823byw703wz5114hg5s30b80qmzxidp8nb8ea16ww9hy933 == \w\i\x\f\d\l\3\r\3\x\d\j\w\a\r\b\7\5\e\2\2\a\r\7\y\5\x\b\9\b\h\j\4\d\p\9\c\i\3\0\5\g\6\8\b\6\j\a\e\u\5\j\v\a\q\v\n\1\h\c\l\8\4\1\9\h\b\g\m\k\s\f\b\r\2\7\a\i\2\r\t\g\y\2\i\b\c\m\7\e\8\4\o\x\h\i\c\d\k\4\0\d\p\i\2\g\f\n\l\u\j\r\c\k\d\d\m\b\7\j\y\n\6\6\a\9\4\q\0\h\z\8\0\5\a\r\s\i\t\s\z\j\5\v\y\7\3\q\m\t\i\7\y\q\d\p\n\4\g\9\5\n\b\2\m\l\b\x\3\n\h\n\z\2\7\v\2\4\3\v\7\x\j\s\f\f\s\d\t\q\3\3\7\x\k\8\4\q\b\1\l\3\7\t\1\n\w\h\1\b\a\9\7\a\5\h\p\g\o\c\y\b\i\a\w\d\0\6\o\2\5\j\t\8\g\s\s\v\j\e\2\d\x\r\8\l\7\l\6\8\s\3\y\e\j\u\0\o\h\j\m\p\f\u\p\e\h\7\h\2\m\t\k\y\m\3\g\f\r\s\j\c\6\o\5\s\b\f\l\e\x\u\l\p\d\2\d\c\y\b\i\0\f\c\i\u\v\u\0\m\1\0\j\y\b\c\c\i\b\8\u\k\1\8\g\x\h\o\1\u\v\p\x\8\0\8\6\z\n\8\c\h\u\y\n\w\j\1\d\t\n\0\h\c\0\v\w\6\v\h\7\2\b\y\h\u\a\2\v\b\k\s\9\c\p\w\6\k\t\7\i\l\q\x\2\5\a\o\g\y\y\m\z\4\1\y\d\o\6\n\m\s\s\v\n\q\u\q\k\1\g\1\1\0\r\c\8\o\9\0\p\i\0\f\z\1\4\s\m\g\i\u\i\l\g\i\l\1\d\l\h\b\k\b\8\x\0\8\j\a\e\a\t\v\s\p\k\l\h\w\4\t\7\0\0\8\2\3\b\y\w\7\0\3\w\z\5\1\1\4\h\g\5\s\3\0\b\8\0\q\m\z\x\i\d\p\8\n\b\8\e\a\1\6\w\w\9\h\y\9\3\3 ]] 00:06:49.873 00:06:49.873 real 0m4.892s 00:06:49.873 user 0m2.659s 00:06:49.873 sys 0m1.246s 00:06:49.873 10:50:36 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:49.873 10:50:36 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:49.873 ************************************ 00:06:49.873 END TEST dd_flags_misc_forced_aio 00:06:49.873 ************************************ 00:06:49.873 10:50:36 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:06:49.873 10:50:36 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:49.873 10:50:36 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:49.873 00:06:49.873 real 0m22.306s 00:06:49.873 user 0m11.166s 00:06:49.873 sys 0m7.517s 00:06:49.873 10:50:36 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:49.873 10:50:36 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 
00:06:49.873 ************************************ 00:06:49.873 END TEST spdk_dd_posix 00:06:49.873 ************************************ 00:06:49.873 10:50:36 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:06:49.873 10:50:36 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:49.873 10:50:36 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:49.873 10:50:36 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:49.873 ************************************ 00:06:49.873 START TEST spdk_dd_malloc 00:06:49.873 ************************************ 00:06:50.133 10:50:36 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:06:50.133 * Looking for test storage... 00:06:50.133 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:50.133 10:50:36 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:50.133 10:50:36 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1693 -- # lcov --version 00:06:50.133 10:50:36 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:50.133 10:50:36 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:50.133 10:50:36 spdk_dd.spdk_dd_malloc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:50.133 10:50:36 spdk_dd.spdk_dd_malloc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:50.133 10:50:36 spdk_dd.spdk_dd_malloc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:50.133 10:50:36 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # IFS=.-: 00:06:50.133 10:50:36 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # read -ra ver1 00:06:50.133 10:50:36 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # IFS=.-: 00:06:50.133 10:50:36 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # read -ra ver2 00:06:50.133 10:50:36 spdk_dd.spdk_dd_malloc -- scripts/common.sh@338 -- # local 'op=<' 00:06:50.133 10:50:36 spdk_dd.spdk_dd_malloc -- scripts/common.sh@340 -- # ver1_l=2 00:06:50.133 10:50:36 spdk_dd.spdk_dd_malloc -- scripts/common.sh@341 -- # ver2_l=1 00:06:50.133 10:50:36 spdk_dd.spdk_dd_malloc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:50.133 10:50:36 spdk_dd.spdk_dd_malloc -- scripts/common.sh@344 -- # case "$op" in 00:06:50.133 10:50:36 spdk_dd.spdk_dd_malloc -- scripts/common.sh@345 -- # : 1 00:06:50.133 10:50:36 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:50.133 10:50:36 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:50.133 10:50:36 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # decimal 1 00:06:50.133 10:50:36 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=1 00:06:50.133 10:50:36 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:50.133 10:50:36 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 1 00:06:50.133 10:50:36 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:50.133 10:50:36 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # decimal 2 00:06:50.133 10:50:36 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=2 00:06:50.134 10:50:36 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:50.134 10:50:36 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 2 00:06:50.134 10:50:36 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:50.134 10:50:36 spdk_dd.spdk_dd_malloc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:50.134 10:50:36 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:50.134 10:50:36 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # return 0 00:06:50.134 10:50:36 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:50.134 10:50:36 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:50.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.134 --rc genhtml_branch_coverage=1 00:06:50.134 --rc genhtml_function_coverage=1 00:06:50.134 --rc genhtml_legend=1 00:06:50.134 --rc geninfo_all_blocks=1 00:06:50.134 --rc geninfo_unexecuted_blocks=1 00:06:50.134 00:06:50.134 ' 00:06:50.134 10:50:36 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:50.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.134 --rc genhtml_branch_coverage=1 00:06:50.134 --rc genhtml_function_coverage=1 00:06:50.134 --rc genhtml_legend=1 00:06:50.134 --rc geninfo_all_blocks=1 00:06:50.134 --rc geninfo_unexecuted_blocks=1 00:06:50.134 00:06:50.134 ' 00:06:50.134 10:50:36 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:50.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.134 --rc genhtml_branch_coverage=1 00:06:50.134 --rc genhtml_function_coverage=1 00:06:50.134 --rc genhtml_legend=1 00:06:50.134 --rc geninfo_all_blocks=1 00:06:50.134 --rc geninfo_unexecuted_blocks=1 00:06:50.134 00:06:50.134 ' 00:06:50.134 10:50:36 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:50.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.134 --rc genhtml_branch_coverage=1 00:06:50.134 --rc genhtml_function_coverage=1 00:06:50.134 --rc genhtml_legend=1 00:06:50.134 --rc geninfo_all_blocks=1 00:06:50.134 --rc geninfo_unexecuted_blocks=1 00:06:50.134 00:06:50.134 ' 00:06:50.134 10:50:36 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:50.134 10:50:36 spdk_dd.spdk_dd_malloc -- scripts/common.sh@15 -- # shopt -s extglob 00:06:50.134 10:50:36 spdk_dd.spdk_dd_malloc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:50.134 10:50:36 spdk_dd.spdk_dd_malloc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:50.134 10:50:36 spdk_dd.spdk_dd_malloc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:50.134 10:50:36 
spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.134 10:50:36 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.134 10:50:36 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.134 10:50:36 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:06:50.134 10:50:36 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.134 10:50:36 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:06:50.134 10:50:36 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:50.134 10:50:36 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:50.134 10:50:36 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:06:50.134 ************************************ 00:06:50.134 START TEST dd_malloc_copy 00:06:50.134 ************************************ 00:06:50.134 10:50:36 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1129 -- # malloc_copy 00:06:50.134 10:50:36 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:06:50.134 10:50:36 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:06:50.134 10:50:36 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 
00:06:50.134 10:50:36 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:06:50.134 10:50:36 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:06:50.134 10:50:36 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:06:50.134 10:50:36 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:06:50.134 10:50:36 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:06:50.134 10:50:36 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:50.134 10:50:36 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:06:50.134 [2024-11-15 10:50:36.965041] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:06:50.134 [2024-11-15 10:50:36.965868] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60579 ] 00:06:50.134 { 00:06:50.134 "subsystems": [ 00:06:50.134 { 00:06:50.134 "subsystem": "bdev", 00:06:50.134 "config": [ 00:06:50.134 { 00:06:50.134 "params": { 00:06:50.134 "block_size": 512, 00:06:50.134 "num_blocks": 1048576, 00:06:50.134 "name": "malloc0" 00:06:50.134 }, 00:06:50.134 "method": "bdev_malloc_create" 00:06:50.134 }, 00:06:50.134 { 00:06:50.134 "params": { 00:06:50.134 "block_size": 512, 00:06:50.134 "num_blocks": 1048576, 00:06:50.134 "name": "malloc1" 00:06:50.134 }, 00:06:50.134 "method": "bdev_malloc_create" 00:06:50.134 }, 00:06:50.134 { 00:06:50.134 "method": "bdev_wait_for_examine" 00:06:50.134 } 00:06:50.134 ] 00:06:50.134 } 00:06:50.134 ] 00:06:50.134 } 00:06:50.393 [2024-11-15 10:50:37.110607] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.393 [2024-11-15 10:50:37.160086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.393 [2024-11-15 10:50:37.228053] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:52.300  [2024-11-15T10:50:39.729Z] Copying: 249/512 [MB] (249 MBps) [2024-11-15T10:50:39.729Z] Copying: 499/512 [MB] (250 MBps) [2024-11-15T10:50:40.664Z] Copying: 512/512 [MB] (average 249 MBps) 00:06:53.803 00:06:53.803 10:50:40 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:06:53.803 10:50:40 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:06:53.803 10:50:40 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:53.803 10:50:40 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:06:53.803 [2024-11-15 10:50:40.499102] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
00:06:53.803 [2024-11-15 10:50:40.499770] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60626 ] 00:06:53.803 { 00:06:53.803 "subsystems": [ 00:06:53.803 { 00:06:53.803 "subsystem": "bdev", 00:06:53.803 "config": [ 00:06:53.803 { 00:06:53.803 "params": { 00:06:53.803 "block_size": 512, 00:06:53.803 "num_blocks": 1048576, 00:06:53.803 "name": "malloc0" 00:06:53.803 }, 00:06:53.803 "method": "bdev_malloc_create" 00:06:53.803 }, 00:06:53.803 { 00:06:53.803 "params": { 00:06:53.803 "block_size": 512, 00:06:53.803 "num_blocks": 1048576, 00:06:53.803 "name": "malloc1" 00:06:53.803 }, 00:06:53.803 "method": "bdev_malloc_create" 00:06:53.803 }, 00:06:53.803 { 00:06:53.803 "method": "bdev_wait_for_examine" 00:06:53.803 } 00:06:53.803 ] 00:06:53.803 } 00:06:53.803 ] 00:06:53.803 } 00:06:53.803 [2024-11-15 10:50:40.638758] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.062 [2024-11-15 10:50:40.684628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.062 [2024-11-15 10:50:40.752449] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:55.439  [2024-11-15T10:50:43.237Z] Copying: 250/512 [MB] (250 MBps) [2024-11-15T10:50:43.237Z] Copying: 501/512 [MB] (250 MBps) [2024-11-15T10:50:44.237Z] Copying: 512/512 [MB] (average 250 MBps) 00:06:57.376 00:06:57.376 00:06:57.376 real 0m7.054s 00:06:57.376 user 0m5.888s 00:06:57.376 sys 0m1.029s 00:06:57.376 10:50:43 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:57.376 10:50:43 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:06:57.376 ************************************ 00:06:57.376 END TEST dd_malloc_copy 00:06:57.376 ************************************ 00:06:57.376 00:06:57.376 real 0m7.274s 00:06:57.376 user 0m6.001s 00:06:57.376 sys 0m1.142s 00:06:57.376 10:50:44 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:57.376 ************************************ 00:06:57.376 END TEST spdk_dd_malloc 00:06:57.376 ************************************ 00:06:57.376 10:50:44 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:06:57.376 10:50:44 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:06:57.376 10:50:44 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:57.376 10:50:44 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:57.376 10:50:44 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:57.376 ************************************ 00:06:57.376 START TEST spdk_dd_bdev_to_bdev 00:06:57.376 ************************************ 00:06:57.376 10:50:44 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:06:57.376 * Looking for test storage... 
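For reference, the dd_malloc_copy run that finishes above creates two RAM-backed bdevs (1048576 blocks of 512 bytes each, per the JSON printed in the trace) and copies one into the other in both directions. A minimal sketch of an equivalent invocation, with bdev names and sizes taken from this log; supplying the config via bash process substitution is an assumption suggested by the --json /dev/fd/62 argument above:

  conf='{"subsystems":[{"subsystem":"bdev","config":[
    {"method":"bdev_malloc_create","params":{"name":"malloc0","num_blocks":1048576,"block_size":512}},
    {"method":"bdev_malloc_create","params":{"name":"malloc1","num_blocks":1048576,"block_size":512}},
    {"method":"bdev_wait_for_examine"}]}]}'
  ./build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json <(printf '%s' "$conf")   # malloc0 -> malloc1
  ./build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json <(printf '%s' "$conf")   # and back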
00:06:57.376 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:57.376 10:50:44 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:57.376 10:50:44 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1693 -- # lcov --version 00:06:57.376 10:50:44 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:57.635 10:50:44 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:57.635 10:50:44 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:57.635 10:50:44 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:57.635 10:50:44 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:57.635 10:50:44 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # IFS=.-: 00:06:57.635 10:50:44 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # read -ra ver1 00:06:57.635 10:50:44 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # IFS=.-: 00:06:57.635 10:50:44 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # read -ra ver2 00:06:57.635 10:50:44 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@338 -- # local 'op=<' 00:06:57.635 10:50:44 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@340 -- # ver1_l=2 00:06:57.635 10:50:44 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@341 -- # ver2_l=1 00:06:57.635 10:50:44 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:57.635 10:50:44 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@344 -- # case "$op" in 00:06:57.635 10:50:44 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@345 -- # : 1 00:06:57.635 10:50:44 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:57.635 10:50:44 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:57.635 10:50:44 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # decimal 1 00:06:57.635 10:50:44 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=1 00:06:57.635 10:50:44 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:57.635 10:50:44 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 1 00:06:57.635 10:50:44 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # ver1[v]=1 00:06:57.635 10:50:44 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # decimal 2 00:06:57.635 10:50:44 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=2 00:06:57.636 10:50:44 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:57.636 10:50:44 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 2 00:06:57.636 10:50:44 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # ver2[v]=2 00:06:57.636 10:50:44 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:57.636 10:50:44 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:57.636 10:50:44 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # return 0 00:06:57.636 10:50:44 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:57.636 10:50:44 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:57.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.636 --rc genhtml_branch_coverage=1 00:06:57.636 --rc genhtml_function_coverage=1 00:06:57.636 --rc genhtml_legend=1 00:06:57.636 --rc geninfo_all_blocks=1 00:06:57.636 --rc geninfo_unexecuted_blocks=1 00:06:57.636 00:06:57.636 ' 00:06:57.636 10:50:44 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:57.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.636 --rc genhtml_branch_coverage=1 00:06:57.636 --rc genhtml_function_coverage=1 00:06:57.636 --rc genhtml_legend=1 00:06:57.636 --rc geninfo_all_blocks=1 00:06:57.636 --rc geninfo_unexecuted_blocks=1 00:06:57.636 00:06:57.636 ' 00:06:57.636 10:50:44 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:57.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.636 --rc genhtml_branch_coverage=1 00:06:57.636 --rc genhtml_function_coverage=1 00:06:57.636 --rc genhtml_legend=1 00:06:57.636 --rc geninfo_all_blocks=1 00:06:57.636 --rc geninfo_unexecuted_blocks=1 00:06:57.636 00:06:57.636 ' 00:06:57.636 10:50:44 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:57.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.636 --rc genhtml_branch_coverage=1 00:06:57.636 --rc genhtml_function_coverage=1 00:06:57.636 --rc genhtml_legend=1 00:06:57.636 --rc geninfo_all_blocks=1 00:06:57.636 --rc geninfo_unexecuted_blocks=1 00:06:57.636 00:06:57.636 ' 00:06:57.636 10:50:44 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:57.636 10:50:44 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@15 -- # shopt -s extglob 00:06:57.636 10:50:44 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:57.636 10:50:44 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:57.636 10:50:44 
spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:57.636 10:50:44 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.636 10:50:44 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.636 10:50:44 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.636 10:50:44 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:06:57.636 10:50:44 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.636 10:50:44 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:06:57.636 10:50:44 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:06:57.636 10:50:44 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:06:57.636 10:50:44 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:06:57.636 10:50:44 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:06:57.636 10:50:44 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:06:57.636 10:50:44 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:06:57.636 10:50:44 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:06:57.636 10:50:44 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:06:57.636 10:50:44 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # 
nvme1_pci=0000:00:11.0 00:06:57.636 10:50:44 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:06:57.636 10:50:44 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:06:57.636 10:50:44 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:06:57.636 10:50:44 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:06:57.636 10:50:44 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:57.636 10:50:44 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:57.636 10:50:44 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:06:57.636 10:50:44 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:06:57.636 10:50:44 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:06:57.636 10:50:44 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:06:57.636 10:50:44 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:57.636 10:50:44 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:57.636 ************************************ 00:06:57.636 START TEST dd_inflate_file 00:06:57.636 ************************************ 00:06:57.636 10:50:44 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:06:57.636 [2024-11-15 10:50:44.303044] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
00:06:57.636 [2024-11-15 10:50:44.303158] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60739 ] 00:06:57.636 [2024-11-15 10:50:44.448515] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.897 [2024-11-15 10:50:44.494438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.897 [2024-11-15 10:50:44.561825] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:57.897  [2024-11-15T10:50:45.017Z] Copying: 64/64 [MB] (average 984 MBps) 00:06:58.156 00:06:58.156 00:06:58.156 real 0m0.661s 00:06:58.156 user 0m0.411s 00:06:58.156 sys 0m0.385s 00:06:58.156 10:50:44 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:58.156 ************************************ 00:06:58.156 END TEST dd_inflate_file 00:06:58.156 ************************************ 00:06:58.156 10:50:44 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:06:58.156 10:50:44 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:06:58.156 10:50:44 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:06:58.156 10:50:44 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:06:58.156 10:50:44 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:06:58.156 10:50:44 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:06:58.156 10:50:44 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:58.156 10:50:44 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:58.156 10:50:44 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:06:58.156 10:50:44 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:58.156 ************************************ 00:06:58.156 START TEST dd_copy_to_out_bdev 00:06:58.156 ************************************ 00:06:58.156 10:50:44 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:06:58.416 [2024-11-15 10:50:45.015222] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
00:06:58.416 [2024-11-15 10:50:45.015315] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60777 ] 00:06:58.416 { 00:06:58.416 "subsystems": [ 00:06:58.416 { 00:06:58.416 "subsystem": "bdev", 00:06:58.416 "config": [ 00:06:58.416 { 00:06:58.416 "params": { 00:06:58.416 "trtype": "pcie", 00:06:58.416 "traddr": "0000:00:10.0", 00:06:58.416 "name": "Nvme0" 00:06:58.416 }, 00:06:58.416 "method": "bdev_nvme_attach_controller" 00:06:58.416 }, 00:06:58.416 { 00:06:58.416 "params": { 00:06:58.416 "trtype": "pcie", 00:06:58.416 "traddr": "0000:00:11.0", 00:06:58.416 "name": "Nvme1" 00:06:58.416 }, 00:06:58.416 "method": "bdev_nvme_attach_controller" 00:06:58.416 }, 00:06:58.416 { 00:06:58.416 "method": "bdev_wait_for_examine" 00:06:58.416 } 00:06:58.416 ] 00:06:58.416 } 00:06:58.416 ] 00:06:58.416 } 00:06:58.416 [2024-11-15 10:50:45.155136] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.416 [2024-11-15 10:50:45.198690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.416 [2024-11-15 10:50:45.266691] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:59.794  [2024-11-15T10:50:46.655Z] Copying: 55/64 [MB] (55 MBps) [2024-11-15T10:50:46.915Z] Copying: 64/64 [MB] (average 54 MBps) 00:07:00.054 00:07:00.054 00:07:00.054 real 0m1.934s 00:07:00.054 user 0m1.692s 00:07:00.054 sys 0m1.572s 00:07:00.054 10:50:46 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:00.054 ************************************ 00:07:00.054 END TEST dd_copy_to_out_bdev 00:07:00.054 ************************************ 00:07:00.054 10:50:46 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:00.313 10:50:46 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:07:00.313 10:50:46 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:07:00.313 10:50:46 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:00.313 10:50:46 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:00.313 10:50:46 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:00.313 ************************************ 00:07:00.313 START TEST dd_offset_magic 00:07:00.313 ************************************ 00:07:00.313 10:50:46 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1129 -- # offset_magic 00:07:00.313 10:50:46 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:07:00.313 10:50:46 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:07:00.313 10:50:46 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:07:00.313 10:50:46 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:07:00.313 10:50:46 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:07:00.313 10:50:46 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 
00:07:00.313 10:50:46 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:00.313 10:50:46 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:00.313 [2024-11-15 10:50:47.004716] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:07:00.313 [2024-11-15 10:50:47.004803] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60823 ] 00:07:00.313 { 00:07:00.313 "subsystems": [ 00:07:00.313 { 00:07:00.313 "subsystem": "bdev", 00:07:00.314 "config": [ 00:07:00.314 { 00:07:00.314 "params": { 00:07:00.314 "trtype": "pcie", 00:07:00.314 "traddr": "0000:00:10.0", 00:07:00.314 "name": "Nvme0" 00:07:00.314 }, 00:07:00.314 "method": "bdev_nvme_attach_controller" 00:07:00.314 }, 00:07:00.314 { 00:07:00.314 "params": { 00:07:00.314 "trtype": "pcie", 00:07:00.314 "traddr": "0000:00:11.0", 00:07:00.314 "name": "Nvme1" 00:07:00.314 }, 00:07:00.314 "method": "bdev_nvme_attach_controller" 00:07:00.314 }, 00:07:00.314 { 00:07:00.314 "method": "bdev_wait_for_examine" 00:07:00.314 } 00:07:00.314 ] 00:07:00.314 } 00:07:00.314 ] 00:07:00.314 } 00:07:00.314 [2024-11-15 10:50:47.145399] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.573 [2024-11-15 10:50:47.194622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.573 [2024-11-15 10:50:47.262831] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:00.832  [2024-11-15T10:50:47.952Z] Copying: 65/65 [MB] (average 773 MBps) 00:07:01.091 00:07:01.091 10:50:47 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:07:01.091 10:50:47 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:07:01.091 10:50:47 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:01.091 10:50:47 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:01.091 [2024-11-15 10:50:47.869184] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
00:07:01.091 { 00:07:01.091 "subsystems": [ 00:07:01.091 { 00:07:01.091 "subsystem": "bdev", 00:07:01.091 "config": [ 00:07:01.091 { 00:07:01.091 "params": { 00:07:01.091 "trtype": "pcie", 00:07:01.091 "traddr": "0000:00:10.0", 00:07:01.091 "name": "Nvme0" 00:07:01.091 }, 00:07:01.091 "method": "bdev_nvme_attach_controller" 00:07:01.091 }, 00:07:01.091 { 00:07:01.091 "params": { 00:07:01.091 "trtype": "pcie", 00:07:01.091 "traddr": "0000:00:11.0", 00:07:01.091 "name": "Nvme1" 00:07:01.091 }, 00:07:01.091 "method": "bdev_nvme_attach_controller" 00:07:01.091 }, 00:07:01.091 { 00:07:01.091 "method": "bdev_wait_for_examine" 00:07:01.091 } 00:07:01.091 ] 00:07:01.091 } 00:07:01.091 ] 00:07:01.091 } 00:07:01.091 [2024-11-15 10:50:47.869275] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60837 ] 00:07:01.351 [2024-11-15 10:50:48.014392] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.351 [2024-11-15 10:50:48.058096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.351 [2024-11-15 10:50:48.126122] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:01.610  [2024-11-15T10:50:48.730Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:07:01.869 00:07:01.869 10:50:48 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:07:01.869 10:50:48 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:07:01.869 10:50:48 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:07:01.869 10:50:48 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:07:01.869 10:50:48 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:07:01.869 10:50:48 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:01.869 10:50:48 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:01.869 [2024-11-15 10:50:48.615623] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
00:07:01.870 [2024-11-15 10:50:48.616219] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60859 ] 00:07:01.870 { 00:07:01.870 "subsystems": [ 00:07:01.870 { 00:07:01.870 "subsystem": "bdev", 00:07:01.870 "config": [ 00:07:01.870 { 00:07:01.870 "params": { 00:07:01.870 "trtype": "pcie", 00:07:01.870 "traddr": "0000:00:10.0", 00:07:01.870 "name": "Nvme0" 00:07:01.870 }, 00:07:01.870 "method": "bdev_nvme_attach_controller" 00:07:01.870 }, 00:07:01.870 { 00:07:01.870 "params": { 00:07:01.870 "trtype": "pcie", 00:07:01.870 "traddr": "0000:00:11.0", 00:07:01.870 "name": "Nvme1" 00:07:01.870 }, 00:07:01.870 "method": "bdev_nvme_attach_controller" 00:07:01.870 }, 00:07:01.870 { 00:07:01.870 "method": "bdev_wait_for_examine" 00:07:01.870 } 00:07:01.870 ] 00:07:01.870 } 00:07:01.870 ] 00:07:01.870 } 00:07:02.129 [2024-11-15 10:50:48.762727] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.129 [2024-11-15 10:50:48.807517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.129 [2024-11-15 10:50:48.877786] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:02.388  [2024-11-15T10:50:49.508Z] Copying: 65/65 [MB] (average 915 MBps) 00:07:02.647 00:07:02.647 10:50:49 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:07:02.647 10:50:49 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:07:02.647 10:50:49 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:02.647 10:50:49 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:02.647 [2024-11-15 10:50:49.487713] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
00:07:02.648 [2024-11-15 10:50:49.487806] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60874 ] 00:07:02.648 { 00:07:02.648 "subsystems": [ 00:07:02.648 { 00:07:02.648 "subsystem": "bdev", 00:07:02.648 "config": [ 00:07:02.648 { 00:07:02.648 "params": { 00:07:02.648 "trtype": "pcie", 00:07:02.648 "traddr": "0000:00:10.0", 00:07:02.648 "name": "Nvme0" 00:07:02.648 }, 00:07:02.648 "method": "bdev_nvme_attach_controller" 00:07:02.648 }, 00:07:02.648 { 00:07:02.648 "params": { 00:07:02.648 "trtype": "pcie", 00:07:02.648 "traddr": "0000:00:11.0", 00:07:02.648 "name": "Nvme1" 00:07:02.648 }, 00:07:02.648 "method": "bdev_nvme_attach_controller" 00:07:02.648 }, 00:07:02.648 { 00:07:02.648 "method": "bdev_wait_for_examine" 00:07:02.648 } 00:07:02.648 ] 00:07:02.648 } 00:07:02.648 ] 00:07:02.648 } 00:07:02.906 [2024-11-15 10:50:49.631277] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.907 [2024-11-15 10:50:49.678280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.907 [2024-11-15 10:50:49.748002] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:03.166  [2024-11-15T10:50:50.286Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:07:03.425 00:07:03.425 10:50:50 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:07:03.425 10:50:50 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:07:03.425 00:07:03.425 real 0m3.233s 00:07:03.425 user 0m2.352s 00:07:03.425 sys 0m1.055s 00:07:03.425 10:50:50 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:03.425 10:50:50 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:03.425 ************************************ 00:07:03.425 END TEST dd_offset_magic 00:07:03.425 ************************************ 00:07:03.425 10:50:50 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:07:03.425 10:50:50 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:07:03.425 10:50:50 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:03.425 10:50:50 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:07:03.425 10:50:50 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:07:03.425 10:50:50 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:07:03.425 10:50:50 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:07:03.425 10:50:50 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:07:03.425 10:50:50 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:07:03.425 10:50:50 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:03.425 10:50:50 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:03.685 [2024-11-15 10:50:50.292948] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
00:07:03.685 [2024-11-15 10:50:50.293075] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60911 ] 00:07:03.685 { 00:07:03.685 "subsystems": [ 00:07:03.685 { 00:07:03.685 "subsystem": "bdev", 00:07:03.685 "config": [ 00:07:03.685 { 00:07:03.685 "params": { 00:07:03.685 "trtype": "pcie", 00:07:03.685 "traddr": "0000:00:10.0", 00:07:03.685 "name": "Nvme0" 00:07:03.685 }, 00:07:03.685 "method": "bdev_nvme_attach_controller" 00:07:03.685 }, 00:07:03.685 { 00:07:03.685 "params": { 00:07:03.685 "trtype": "pcie", 00:07:03.685 "traddr": "0000:00:11.0", 00:07:03.685 "name": "Nvme1" 00:07:03.685 }, 00:07:03.685 "method": "bdev_nvme_attach_controller" 00:07:03.685 }, 00:07:03.685 { 00:07:03.685 "method": "bdev_wait_for_examine" 00:07:03.685 } 00:07:03.685 ] 00:07:03.685 } 00:07:03.685 ] 00:07:03.685 } 00:07:03.685 [2024-11-15 10:50:50.438393] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.685 [2024-11-15 10:50:50.485575] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.945 [2024-11-15 10:50:50.557690] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:03.945  [2024-11-15T10:50:51.065Z] Copying: 5120/5120 [kB] (average 1000 MBps) 00:07:04.204 00:07:04.204 10:50:51 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:07:04.204 10:50:51 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:07:04.204 10:50:51 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:07:04.204 10:50:51 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:07:04.204 10:50:51 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:07:04.204 10:50:51 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:07:04.204 10:50:51 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:07:04.204 10:50:51 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:07:04.204 10:50:51 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:04.204 10:50:51 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:04.204 [2024-11-15 10:50:51.046295] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
00:07:04.204 [2024-11-15 10:50:51.046695] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60932 ] 00:07:04.463 { 00:07:04.463 "subsystems": [ 00:07:04.463 { 00:07:04.463 "subsystem": "bdev", 00:07:04.463 "config": [ 00:07:04.463 { 00:07:04.463 "params": { 00:07:04.463 "trtype": "pcie", 00:07:04.463 "traddr": "0000:00:10.0", 00:07:04.463 "name": "Nvme0" 00:07:04.463 }, 00:07:04.463 "method": "bdev_nvme_attach_controller" 00:07:04.463 }, 00:07:04.463 { 00:07:04.463 "params": { 00:07:04.463 "trtype": "pcie", 00:07:04.463 "traddr": "0000:00:11.0", 00:07:04.463 "name": "Nvme1" 00:07:04.463 }, 00:07:04.463 "method": "bdev_nvme_attach_controller" 00:07:04.463 }, 00:07:04.463 { 00:07:04.463 "method": "bdev_wait_for_examine" 00:07:04.463 } 00:07:04.464 ] 00:07:04.464 } 00:07:04.464 ] 00:07:04.464 } 00:07:04.464 [2024-11-15 10:50:51.185805] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.464 [2024-11-15 10:50:51.237250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.464 [2024-11-15 10:50:51.306898] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:04.722  [2024-11-15T10:50:51.843Z] Copying: 5120/5120 [kB] (average 714 MBps) 00:07:04.982 00:07:04.982 10:50:51 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:07:04.982 ************************************ 00:07:04.982 END TEST spdk_dd_bdev_to_bdev 00:07:04.982 ************************************ 00:07:04.982 00:07:04.982 real 0m7.698s 00:07:04.982 user 0m5.679s 00:07:04.982 sys 0m3.823s 00:07:04.982 10:50:51 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:04.982 10:50:51 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:04.982 10:50:51 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:07:04.982 10:50:51 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:07:04.982 10:50:51 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:04.982 10:50:51 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:04.982 10:50:51 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:04.982 ************************************ 00:07:04.982 START TEST spdk_dd_uring 00:07:04.982 ************************************ 00:07:04.982 10:50:51 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:07:05.242 * Looking for test storage... 
00:07:05.242 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:05.242 10:50:51 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:05.242 10:50:51 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:05.242 10:50:51 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1693 -- # lcov --version 00:07:05.242 10:50:51 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:05.242 10:50:51 spdk_dd.spdk_dd_uring -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:05.242 10:50:51 spdk_dd.spdk_dd_uring -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:05.242 10:50:51 spdk_dd.spdk_dd_uring -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:05.242 10:50:51 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # IFS=.-: 00:07:05.242 10:50:51 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # read -ra ver1 00:07:05.242 10:50:51 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # IFS=.-: 00:07:05.242 10:50:51 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # read -ra ver2 00:07:05.242 10:50:51 spdk_dd.spdk_dd_uring -- scripts/common.sh@338 -- # local 'op=<' 00:07:05.242 10:50:51 spdk_dd.spdk_dd_uring -- scripts/common.sh@340 -- # ver1_l=2 00:07:05.242 10:50:51 spdk_dd.spdk_dd_uring -- scripts/common.sh@341 -- # ver2_l=1 00:07:05.242 10:50:51 spdk_dd.spdk_dd_uring -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:05.242 10:50:51 spdk_dd.spdk_dd_uring -- scripts/common.sh@344 -- # case "$op" in 00:07:05.242 10:50:51 spdk_dd.spdk_dd_uring -- scripts/common.sh@345 -- # : 1 00:07:05.242 10:50:51 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:05.242 10:50:51 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:05.242 10:50:51 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # decimal 1 00:07:05.242 10:50:51 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=1 00:07:05.242 10:50:51 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:05.242 10:50:51 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 1 00:07:05.242 10:50:51 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # ver1[v]=1 00:07:05.242 10:50:51 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # decimal 2 00:07:05.242 10:50:51 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=2 00:07:05.242 10:50:51 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:05.242 10:50:51 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 2 00:07:05.242 10:50:51 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # ver2[v]=2 00:07:05.242 10:50:51 spdk_dd.spdk_dd_uring -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:05.242 10:50:51 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:05.242 10:50:51 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # return 0 00:07:05.242 10:50:51 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:05.242 10:50:51 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:05.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.242 --rc genhtml_branch_coverage=1 00:07:05.242 --rc genhtml_function_coverage=1 00:07:05.242 --rc genhtml_legend=1 00:07:05.242 --rc geninfo_all_blocks=1 00:07:05.242 --rc geninfo_unexecuted_blocks=1 00:07:05.242 00:07:05.242 ' 00:07:05.242 10:50:51 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:05.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.242 --rc genhtml_branch_coverage=1 00:07:05.242 --rc genhtml_function_coverage=1 00:07:05.242 --rc genhtml_legend=1 00:07:05.242 --rc geninfo_all_blocks=1 00:07:05.242 --rc geninfo_unexecuted_blocks=1 00:07:05.242 00:07:05.242 ' 00:07:05.242 10:50:51 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:05.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.242 --rc genhtml_branch_coverage=1 00:07:05.242 --rc genhtml_function_coverage=1 00:07:05.242 --rc genhtml_legend=1 00:07:05.242 --rc geninfo_all_blocks=1 00:07:05.242 --rc geninfo_unexecuted_blocks=1 00:07:05.242 00:07:05.242 ' 00:07:05.242 10:50:51 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:05.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.242 --rc genhtml_branch_coverage=1 00:07:05.242 --rc genhtml_function_coverage=1 00:07:05.242 --rc genhtml_legend=1 00:07:05.242 --rc geninfo_all_blocks=1 00:07:05.242 --rc geninfo_unexecuted_blocks=1 00:07:05.242 00:07:05.242 ' 00:07:05.242 10:50:51 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:05.242 10:50:51 spdk_dd.spdk_dd_uring -- scripts/common.sh@15 -- # shopt -s extglob 00:07:05.242 10:50:51 spdk_dd.spdk_dd_uring -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:05.242 10:50:51 spdk_dd.spdk_dd_uring -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:05.242 10:50:51 spdk_dd.spdk_dd_uring -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:05.242 10:50:51 spdk_dd.spdk_dd_uring -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.242 10:50:51 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.242 10:50:51 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.242 10:50:51 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:07:05.242 10:50:51 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.242 10:50:51 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:07:05.242 10:50:51 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:05.242 10:50:51 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:05.242 10:50:51 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:07:05.242 ************************************ 00:07:05.242 START TEST dd_uring_copy 00:07:05.242 ************************************ 00:07:05.242 10:50:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1129 -- # uring_zram_copy 00:07:05.242 10:50:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:07:05.242 10:50:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:07:05.242 10:50:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:07:05.242 10:50:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:05.242 
10:50:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:07:05.242 10:50:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:07:05.242 10:50:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@159 -- # [[ -e /sys/class/zram-control ]] 00:07:05.242 10:50:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@160 -- # return 00:07:05.242 10:50:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:07:05.242 10:50:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # cat /sys/class/zram-control/hot_add 00:07:05.243 10:50:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:07:05.243 10:50:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:07:05.243 10:50:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # local id=1 00:07:05.243 10:50:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@178 -- # local size=512M 00:07:05.243 10:50:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@180 -- # [[ -e /sys/block/zram1 ]] 00:07:05.243 10:50:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # echo 512M 00:07:05.243 10:50:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:07:05.243 10:50:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:07:05.243 10:50:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:07:05.243 10:50:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:07:05.243 10:50:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:07:05.243 10:50:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:07:05.243 10:50:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:07:05.243 10:50:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:07:05.243 10:50:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:05.243 10:50:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # magic=9lq0g5g0g7qdaa3mc6t39bz7yfwb8ogqoyp10j4shaqqcuyz0z6f3wc3cp8vwvgjrj1m4xzzu9m6s667nxhfsnr6m555qhdf2k4055dx2cv9zafxnzuxend17y5zy24q9z4wzjnfqrb7fr8ybfrav7ms073952fkrqdao4zn0lk41730qkr8ac1rbp3dhsszeijfuyxl4mkyltnkpmezqb6gu1cydxeujj5m1rrinoqby6ot3zmawkhhrbil9ptnmvl0fsu8dz46xmo4byr66gvhkm8nnvnh9tazn734y8emlkbj1xhkhgwkci3mbb11zi4wqsogbx99npp4teikfuvhwp6n4veqfja5037d53rc05ir0o1xfgtel0di1dgwavdx5sy32bi4a2l1qqt9y4vjvzl1jqdfiow3jyjpd6ukhaq0cz2fwyg7w0qi977rfp1g1m14fpct83iv7gbpabn8p0nrhs65bs05a00wjrzx7ex8t7xnwwzpja1w95xqdrruizb1jelgbk6pky4q636k2pii1hbe63s6ilyy7qs187wzqb3rp695bhcoa7uy7frah9et37wx8cw3mzekzcz0jnpd8t7j4c9p90s8q3sp513tbfzq6p9nmsaoepwnmnvy8ryb84ix66niokqoz9nv5q9n4lpyqus1sey3zzp43vbn02cztk927qcoof2d1lhc0f5g1qxtl0hee789ie1lmr6q52q28yrkgggy8omd1kwxaztdyaw2bgbsk0881am1yhj23gvefr6z5c8jdodgu0kxnrkrt16ixvr00gb3blnfjypupdbsb8itgqo6nhqossnxmwb38nnx8t31d664cgyueodm7ne8ppse6exhr9dsqnd2xlh5lrxc4o49sx6vqtq3qgq2nmu2plhpg8u2z8vbtg0meoh9otepkxbzzzp769urpusgkuyhk68m3m23acwt7munv5qldkwqtqp3t132sonxs1djcpe3ptrasqevl787h5w92m6jxlv9 00:07:05.243 10:50:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo 
9lq0g5g0g7qdaa3mc6t39bz7yfwb8ogqoyp10j4shaqqcuyz0z6f3wc3cp8vwvgjrj1m4xzzu9m6s667nxhfsnr6m555qhdf2k4055dx2cv9zafxnzuxend17y5zy24q9z4wzjnfqrb7fr8ybfrav7ms073952fkrqdao4zn0lk41730qkr8ac1rbp3dhsszeijfuyxl4mkyltnkpmezqb6gu1cydxeujj5m1rrinoqby6ot3zmawkhhrbil9ptnmvl0fsu8dz46xmo4byr66gvhkm8nnvnh9tazn734y8emlkbj1xhkhgwkci3mbb11zi4wqsogbx99npp4teikfuvhwp6n4veqfja5037d53rc05ir0o1xfgtel0di1dgwavdx5sy32bi4a2l1qqt9y4vjvzl1jqdfiow3jyjpd6ukhaq0cz2fwyg7w0qi977rfp1g1m14fpct83iv7gbpabn8p0nrhs65bs05a00wjrzx7ex8t7xnwwzpja1w95xqdrruizb1jelgbk6pky4q636k2pii1hbe63s6ilyy7qs187wzqb3rp695bhcoa7uy7frah9et37wx8cw3mzekzcz0jnpd8t7j4c9p90s8q3sp513tbfzq6p9nmsaoepwnmnvy8ryb84ix66niokqoz9nv5q9n4lpyqus1sey3zzp43vbn02cztk927qcoof2d1lhc0f5g1qxtl0hee789ie1lmr6q52q28yrkgggy8omd1kwxaztdyaw2bgbsk0881am1yhj23gvefr6z5c8jdodgu0kxnrkrt16ixvr00gb3blnfjypupdbsb8itgqo6nhqossnxmwb38nnx8t31d664cgyueodm7ne8ppse6exhr9dsqnd2xlh5lrxc4o49sx6vqtq3qgq2nmu2plhpg8u2z8vbtg0meoh9otepkxbzzzp769urpusgkuyhk68m3m23acwt7munv5qldkwqtqp3t132sonxs1djcpe3ptrasqevl787h5w92m6jxlv9 00:07:05.243 10:50:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:07:05.243 [2024-11-15 10:50:52.097241] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:07:05.243 [2024-11-15 10:50:52.097333] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61011 ] 00:07:05.502 [2024-11-15 10:50:52.239291] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.502 [2024-11-15 10:50:52.286870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.502 [2024-11-15 10:50:52.355846] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:06.439  [2024-11-15T10:50:53.559Z] Copying: 511/511 [MB] (average 1207 MBps) 00:07:06.698 00:07:06.698 10:50:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:07:06.698 10:50:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:07:06.698 10:50:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:06.698 10:50:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:06.957 { 00:07:06.957 "subsystems": [ 00:07:06.957 { 00:07:06.957 "subsystem": "bdev", 00:07:06.957 "config": [ 00:07:06.957 { 00:07:06.957 "params": { 00:07:06.957 "block_size": 512, 00:07:06.957 "num_blocks": 1048576, 00:07:06.957 "name": "malloc0" 00:07:06.957 }, 00:07:06.957 "method": "bdev_malloc_create" 00:07:06.957 }, 00:07:06.957 { 00:07:06.957 "params": { 00:07:06.957 "filename": "/dev/zram1", 00:07:06.957 "name": "uring0" 00:07:06.957 }, 00:07:06.957 "method": "bdev_uring_create" 00:07:06.957 }, 00:07:06.957 { 00:07:06.957 "method": "bdev_wait_for_examine" 00:07:06.957 } 00:07:06.957 ] 00:07:06.957 } 00:07:06.957 ] 00:07:06.957 } 00:07:06.957 [2024-11-15 10:50:53.589240] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
00:07:06.958 [2024-11-15 10:50:53.589350] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61029 ] 00:07:06.958 [2024-11-15 10:50:53.734236] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.958 [2024-11-15 10:50:53.790818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.216 [2024-11-15 10:50:53.860092] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:08.595  [2024-11-15T10:50:56.022Z] Copying: 270/512 [MB] (270 MBps) [2024-11-15T10:50:56.591Z] Copying: 512/512 [MB] (average 270 MBps) 00:07:09.730 00:07:09.730 10:50:56 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:07:09.730 10:50:56 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:07:09.730 10:50:56 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:09.730 10:50:56 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:09.730 { 00:07:09.730 "subsystems": [ 00:07:09.730 { 00:07:09.730 "subsystem": "bdev", 00:07:09.730 "config": [ 00:07:09.730 { 00:07:09.730 "params": { 00:07:09.730 "block_size": 512, 00:07:09.730 "num_blocks": 1048576, 00:07:09.730 "name": "malloc0" 00:07:09.730 }, 00:07:09.730 "method": "bdev_malloc_create" 00:07:09.730 }, 00:07:09.730 { 00:07:09.730 "params": { 00:07:09.730 "filename": "/dev/zram1", 00:07:09.730 "name": "uring0" 00:07:09.730 }, 00:07:09.730 "method": "bdev_uring_create" 00:07:09.730 }, 00:07:09.730 { 00:07:09.730 "method": "bdev_wait_for_examine" 00:07:09.730 } 00:07:09.730 ] 00:07:09.730 } 00:07:09.730 ] 00:07:09.730 } 00:07:09.730 [2024-11-15 10:50:56.573468] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
00:07:09.730 [2024-11-15 10:50:56.574041] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61073 ] 00:07:09.989 [2024-11-15 10:50:56.710148] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.989 [2024-11-15 10:50:56.762198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.989 [2024-11-15 10:50:56.831147] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:11.391  [2024-11-15T10:50:59.189Z] Copying: 183/512 [MB] (183 MBps) [2024-11-15T10:51:00.126Z] Copying: 374/512 [MB] (191 MBps) [2024-11-15T10:51:00.385Z] Copying: 512/512 [MB] (average 183 MBps) 00:07:13.524 00:07:13.524 10:51:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:07:13.524 10:51:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ 9lq0g5g0g7qdaa3mc6t39bz7yfwb8ogqoyp10j4shaqqcuyz0z6f3wc3cp8vwvgjrj1m4xzzu9m6s667nxhfsnr6m555qhdf2k4055dx2cv9zafxnzuxend17y5zy24q9z4wzjnfqrb7fr8ybfrav7ms073952fkrqdao4zn0lk41730qkr8ac1rbp3dhsszeijfuyxl4mkyltnkpmezqb6gu1cydxeujj5m1rrinoqby6ot3zmawkhhrbil9ptnmvl0fsu8dz46xmo4byr66gvhkm8nnvnh9tazn734y8emlkbj1xhkhgwkci3mbb11zi4wqsogbx99npp4teikfuvhwp6n4veqfja5037d53rc05ir0o1xfgtel0di1dgwavdx5sy32bi4a2l1qqt9y4vjvzl1jqdfiow3jyjpd6ukhaq0cz2fwyg7w0qi977rfp1g1m14fpct83iv7gbpabn8p0nrhs65bs05a00wjrzx7ex8t7xnwwzpja1w95xqdrruizb1jelgbk6pky4q636k2pii1hbe63s6ilyy7qs187wzqb3rp695bhcoa7uy7frah9et37wx8cw3mzekzcz0jnpd8t7j4c9p90s8q3sp513tbfzq6p9nmsaoepwnmnvy8ryb84ix66niokqoz9nv5q9n4lpyqus1sey3zzp43vbn02cztk927qcoof2d1lhc0f5g1qxtl0hee789ie1lmr6q52q28yrkgggy8omd1kwxaztdyaw2bgbsk0881am1yhj23gvefr6z5c8jdodgu0kxnrkrt16ixvr00gb3blnfjypupdbsb8itgqo6nhqossnxmwb38nnx8t31d664cgyueodm7ne8ppse6exhr9dsqnd2xlh5lrxc4o49sx6vqtq3qgq2nmu2plhpg8u2z8vbtg0meoh9otepkxbzzzp769urpusgkuyhk68m3m23acwt7munv5qldkwqtqp3t132sonxs1djcpe3ptrasqevl787h5w92m6jxlv9 == 
\9\l\q\0\g\5\g\0\g\7\q\d\a\a\3\m\c\6\t\3\9\b\z\7\y\f\w\b\8\o\g\q\o\y\p\1\0\j\4\s\h\a\q\q\c\u\y\z\0\z\6\f\3\w\c\3\c\p\8\v\w\v\g\j\r\j\1\m\4\x\z\z\u\9\m\6\s\6\6\7\n\x\h\f\s\n\r\6\m\5\5\5\q\h\d\f\2\k\4\0\5\5\d\x\2\c\v\9\z\a\f\x\n\z\u\x\e\n\d\1\7\y\5\z\y\2\4\q\9\z\4\w\z\j\n\f\q\r\b\7\f\r\8\y\b\f\r\a\v\7\m\s\0\7\3\9\5\2\f\k\r\q\d\a\o\4\z\n\0\l\k\4\1\7\3\0\q\k\r\8\a\c\1\r\b\p\3\d\h\s\s\z\e\i\j\f\u\y\x\l\4\m\k\y\l\t\n\k\p\m\e\z\q\b\6\g\u\1\c\y\d\x\e\u\j\j\5\m\1\r\r\i\n\o\q\b\y\6\o\t\3\z\m\a\w\k\h\h\r\b\i\l\9\p\t\n\m\v\l\0\f\s\u\8\d\z\4\6\x\m\o\4\b\y\r\6\6\g\v\h\k\m\8\n\n\v\n\h\9\t\a\z\n\7\3\4\y\8\e\m\l\k\b\j\1\x\h\k\h\g\w\k\c\i\3\m\b\b\1\1\z\i\4\w\q\s\o\g\b\x\9\9\n\p\p\4\t\e\i\k\f\u\v\h\w\p\6\n\4\v\e\q\f\j\a\5\0\3\7\d\5\3\r\c\0\5\i\r\0\o\1\x\f\g\t\e\l\0\d\i\1\d\g\w\a\v\d\x\5\s\y\3\2\b\i\4\a\2\l\1\q\q\t\9\y\4\v\j\v\z\l\1\j\q\d\f\i\o\w\3\j\y\j\p\d\6\u\k\h\a\q\0\c\z\2\f\w\y\g\7\w\0\q\i\9\7\7\r\f\p\1\g\1\m\1\4\f\p\c\t\8\3\i\v\7\g\b\p\a\b\n\8\p\0\n\r\h\s\6\5\b\s\0\5\a\0\0\w\j\r\z\x\7\e\x\8\t\7\x\n\w\w\z\p\j\a\1\w\9\5\x\q\d\r\r\u\i\z\b\1\j\e\l\g\b\k\6\p\k\y\4\q\6\3\6\k\2\p\i\i\1\h\b\e\6\3\s\6\i\l\y\y\7\q\s\1\8\7\w\z\q\b\3\r\p\6\9\5\b\h\c\o\a\7\u\y\7\f\r\a\h\9\e\t\3\7\w\x\8\c\w\3\m\z\e\k\z\c\z\0\j\n\p\d\8\t\7\j\4\c\9\p\9\0\s\8\q\3\s\p\5\1\3\t\b\f\z\q\6\p\9\n\m\s\a\o\e\p\w\n\m\n\v\y\8\r\y\b\8\4\i\x\6\6\n\i\o\k\q\o\z\9\n\v\5\q\9\n\4\l\p\y\q\u\s\1\s\e\y\3\z\z\p\4\3\v\b\n\0\2\c\z\t\k\9\2\7\q\c\o\o\f\2\d\1\l\h\c\0\f\5\g\1\q\x\t\l\0\h\e\e\7\8\9\i\e\1\l\m\r\6\q\5\2\q\2\8\y\r\k\g\g\g\y\8\o\m\d\1\k\w\x\a\z\t\d\y\a\w\2\b\g\b\s\k\0\8\8\1\a\m\1\y\h\j\2\3\g\v\e\f\r\6\z\5\c\8\j\d\o\d\g\u\0\k\x\n\r\k\r\t\1\6\i\x\v\r\0\0\g\b\3\b\l\n\f\j\y\p\u\p\d\b\s\b\8\i\t\g\q\o\6\n\h\q\o\s\s\n\x\m\w\b\3\8\n\n\x\8\t\3\1\d\6\6\4\c\g\y\u\e\o\d\m\7\n\e\8\p\p\s\e\6\e\x\h\r\9\d\s\q\n\d\2\x\l\h\5\l\r\x\c\4\o\4\9\s\x\6\v\q\t\q\3\q\g\q\2\n\m\u\2\p\l\h\p\g\8\u\2\z\8\v\b\t\g\0\m\e\o\h\9\o\t\e\p\k\x\b\z\z\z\p\7\6\9\u\r\p\u\s\g\k\u\y\h\k\6\8\m\3\m\2\3\a\c\w\t\7\m\u\n\v\5\q\l\d\k\w\q\t\q\p\3\t\1\3\2\s\o\n\x\s\1\d\j\c\p\e\3\p\t\r\a\s\q\e\v\l\7\8\7\h\5\w\9\2\m\6\j\x\l\v\9 ]] 00:07:13.524 10:51:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:07:13.524 10:51:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ 9lq0g5g0g7qdaa3mc6t39bz7yfwb8ogqoyp10j4shaqqcuyz0z6f3wc3cp8vwvgjrj1m4xzzu9m6s667nxhfsnr6m555qhdf2k4055dx2cv9zafxnzuxend17y5zy24q9z4wzjnfqrb7fr8ybfrav7ms073952fkrqdao4zn0lk41730qkr8ac1rbp3dhsszeijfuyxl4mkyltnkpmezqb6gu1cydxeujj5m1rrinoqby6ot3zmawkhhrbil9ptnmvl0fsu8dz46xmo4byr66gvhkm8nnvnh9tazn734y8emlkbj1xhkhgwkci3mbb11zi4wqsogbx99npp4teikfuvhwp6n4veqfja5037d53rc05ir0o1xfgtel0di1dgwavdx5sy32bi4a2l1qqt9y4vjvzl1jqdfiow3jyjpd6ukhaq0cz2fwyg7w0qi977rfp1g1m14fpct83iv7gbpabn8p0nrhs65bs05a00wjrzx7ex8t7xnwwzpja1w95xqdrruizb1jelgbk6pky4q636k2pii1hbe63s6ilyy7qs187wzqb3rp695bhcoa7uy7frah9et37wx8cw3mzekzcz0jnpd8t7j4c9p90s8q3sp513tbfzq6p9nmsaoepwnmnvy8ryb84ix66niokqoz9nv5q9n4lpyqus1sey3zzp43vbn02cztk927qcoof2d1lhc0f5g1qxtl0hee789ie1lmr6q52q28yrkgggy8omd1kwxaztdyaw2bgbsk0881am1yhj23gvefr6z5c8jdodgu0kxnrkrt16ixvr00gb3blnfjypupdbsb8itgqo6nhqossnxmwb38nnx8t31d664cgyueodm7ne8ppse6exhr9dsqnd2xlh5lrxc4o49sx6vqtq3qgq2nmu2plhpg8u2z8vbtg0meoh9otepkxbzzzp769urpusgkuyhk68m3m23acwt7munv5qldkwqtqp3t132sonxs1djcpe3ptrasqevl787h5w92m6jxlv9 == 
\9\l\q\0\g\5\g\0\g\7\q\d\a\a\3\m\c\6\t\3\9\b\z\7\y\f\w\b\8\o\g\q\o\y\p\1\0\j\4\s\h\a\q\q\c\u\y\z\0\z\6\f\3\w\c\3\c\p\8\v\w\v\g\j\r\j\1\m\4\x\z\z\u\9\m\6\s\6\6\7\n\x\h\f\s\n\r\6\m\5\5\5\q\h\d\f\2\k\4\0\5\5\d\x\2\c\v\9\z\a\f\x\n\z\u\x\e\n\d\1\7\y\5\z\y\2\4\q\9\z\4\w\z\j\n\f\q\r\b\7\f\r\8\y\b\f\r\a\v\7\m\s\0\7\3\9\5\2\f\k\r\q\d\a\o\4\z\n\0\l\k\4\1\7\3\0\q\k\r\8\a\c\1\r\b\p\3\d\h\s\s\z\e\i\j\f\u\y\x\l\4\m\k\y\l\t\n\k\p\m\e\z\q\b\6\g\u\1\c\y\d\x\e\u\j\j\5\m\1\r\r\i\n\o\q\b\y\6\o\t\3\z\m\a\w\k\h\h\r\b\i\l\9\p\t\n\m\v\l\0\f\s\u\8\d\z\4\6\x\m\o\4\b\y\r\6\6\g\v\h\k\m\8\n\n\v\n\h\9\t\a\z\n\7\3\4\y\8\e\m\l\k\b\j\1\x\h\k\h\g\w\k\c\i\3\m\b\b\1\1\z\i\4\w\q\s\o\g\b\x\9\9\n\p\p\4\t\e\i\k\f\u\v\h\w\p\6\n\4\v\e\q\f\j\a\5\0\3\7\d\5\3\r\c\0\5\i\r\0\o\1\x\f\g\t\e\l\0\d\i\1\d\g\w\a\v\d\x\5\s\y\3\2\b\i\4\a\2\l\1\q\q\t\9\y\4\v\j\v\z\l\1\j\q\d\f\i\o\w\3\j\y\j\p\d\6\u\k\h\a\q\0\c\z\2\f\w\y\g\7\w\0\q\i\9\7\7\r\f\p\1\g\1\m\1\4\f\p\c\t\8\3\i\v\7\g\b\p\a\b\n\8\p\0\n\r\h\s\6\5\b\s\0\5\a\0\0\w\j\r\z\x\7\e\x\8\t\7\x\n\w\w\z\p\j\a\1\w\9\5\x\q\d\r\r\u\i\z\b\1\j\e\l\g\b\k\6\p\k\y\4\q\6\3\6\k\2\p\i\i\1\h\b\e\6\3\s\6\i\l\y\y\7\q\s\1\8\7\w\z\q\b\3\r\p\6\9\5\b\h\c\o\a\7\u\y\7\f\r\a\h\9\e\t\3\7\w\x\8\c\w\3\m\z\e\k\z\c\z\0\j\n\p\d\8\t\7\j\4\c\9\p\9\0\s\8\q\3\s\p\5\1\3\t\b\f\z\q\6\p\9\n\m\s\a\o\e\p\w\n\m\n\v\y\8\r\y\b\8\4\i\x\6\6\n\i\o\k\q\o\z\9\n\v\5\q\9\n\4\l\p\y\q\u\s\1\s\e\y\3\z\z\p\4\3\v\b\n\0\2\c\z\t\k\9\2\7\q\c\o\o\f\2\d\1\l\h\c\0\f\5\g\1\q\x\t\l\0\h\e\e\7\8\9\i\e\1\l\m\r\6\q\5\2\q\2\8\y\r\k\g\g\g\y\8\o\m\d\1\k\w\x\a\z\t\d\y\a\w\2\b\g\b\s\k\0\8\8\1\a\m\1\y\h\j\2\3\g\v\e\f\r\6\z\5\c\8\j\d\o\d\g\u\0\k\x\n\r\k\r\t\1\6\i\x\v\r\0\0\g\b\3\b\l\n\f\j\y\p\u\p\d\b\s\b\8\i\t\g\q\o\6\n\h\q\o\s\s\n\x\m\w\b\3\8\n\n\x\8\t\3\1\d\6\6\4\c\g\y\u\e\o\d\m\7\n\e\8\p\p\s\e\6\e\x\h\r\9\d\s\q\n\d\2\x\l\h\5\l\r\x\c\4\o\4\9\s\x\6\v\q\t\q\3\q\g\q\2\n\m\u\2\p\l\h\p\g\8\u\2\z\8\v\b\t\g\0\m\e\o\h\9\o\t\e\p\k\x\b\z\z\z\p\7\6\9\u\r\p\u\s\g\k\u\y\h\k\6\8\m\3\m\2\3\a\c\w\t\7\m\u\n\v\5\q\l\d\k\w\q\t\q\p\3\t\1\3\2\s\o\n\x\s\1\d\j\c\p\e\3\p\t\r\a\s\q\e\v\l\7\8\7\h\5\w\9\2\m\6\j\x\l\v\9 ]] 00:07:13.524 10:51:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:14.092 10:51:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:07:14.092 10:51:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:07:14.092 10:51:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:14.092 10:51:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:14.092 [2024-11-15 10:51:00.814106] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
00:07:14.092 [2024-11-15 10:51:00.814188] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61148 ] 00:07:14.092 { 00:07:14.092 "subsystems": [ 00:07:14.092 { 00:07:14.092 "subsystem": "bdev", 00:07:14.092 "config": [ 00:07:14.092 { 00:07:14.092 "params": { 00:07:14.092 "block_size": 512, 00:07:14.092 "num_blocks": 1048576, 00:07:14.092 "name": "malloc0" 00:07:14.092 }, 00:07:14.092 "method": "bdev_malloc_create" 00:07:14.092 }, 00:07:14.092 { 00:07:14.092 "params": { 00:07:14.092 "filename": "/dev/zram1", 00:07:14.092 "name": "uring0" 00:07:14.092 }, 00:07:14.092 "method": "bdev_uring_create" 00:07:14.092 }, 00:07:14.092 { 00:07:14.092 "method": "bdev_wait_for_examine" 00:07:14.092 } 00:07:14.092 ] 00:07:14.092 } 00:07:14.092 ] 00:07:14.092 } 00:07:14.351 [2024-11-15 10:51:00.961423] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.351 [2024-11-15 10:51:01.026719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.351 [2024-11-15 10:51:01.108803] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:15.729  [2024-11-15T10:51:03.526Z] Copying: 193/512 [MB] (193 MBps) [2024-11-15T10:51:04.093Z] Copying: 387/512 [MB] (193 MBps) [2024-11-15T10:51:04.661Z] Copying: 512/512 [MB] (average 194 MBps) 00:07:17.800 00:07:17.800 10:51:04 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:07:17.800 10:51:04 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:07:17.800 10:51:04 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:07:17.800 10:51:04 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:07:17.800 10:51:04 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:07:17.800 10:51:04 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:07:17.800 10:51:04 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:17.800 10:51:04 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:17.800 [2024-11-15 10:51:04.546409] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
00:07:17.800 [2024-11-15 10:51:04.546514] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61204 ] 00:07:17.800 { 00:07:17.800 "subsystems": [ 00:07:17.800 { 00:07:17.800 "subsystem": "bdev", 00:07:17.800 "config": [ 00:07:17.800 { 00:07:17.800 "params": { 00:07:17.800 "block_size": 512, 00:07:17.800 "num_blocks": 1048576, 00:07:17.800 "name": "malloc0" 00:07:17.800 }, 00:07:17.800 "method": "bdev_malloc_create" 00:07:17.800 }, 00:07:17.800 { 00:07:17.800 "params": { 00:07:17.800 "filename": "/dev/zram1", 00:07:17.800 "name": "uring0" 00:07:17.800 }, 00:07:17.800 "method": "bdev_uring_create" 00:07:17.800 }, 00:07:17.800 { 00:07:17.800 "params": { 00:07:17.800 "name": "uring0" 00:07:17.800 }, 00:07:17.800 "method": "bdev_uring_delete" 00:07:17.800 }, 00:07:17.800 { 00:07:17.800 "method": "bdev_wait_for_examine" 00:07:17.800 } 00:07:17.800 ] 00:07:17.800 } 00:07:17.800 ] 00:07:17.800 } 00:07:18.067 [2024-11-15 10:51:04.690442] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.067 [2024-11-15 10:51:04.733965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.068 [2024-11-15 10:51:04.801802] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:18.329  [2024-11-15T10:51:05.758Z] Copying: 0/0 [B] (average 0 Bps) 00:07:18.897 00:07:18.897 10:51:05 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:07:18.897 10:51:05 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:18.897 10:51:05 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:07:18.898 10:51:05 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:18.898 10:51:05 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@652 -- # local es=0 00:07:18.898 10:51:05 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:18.898 10:51:05 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:18.898 10:51:05 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:18.898 10:51:05 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:18.898 10:51:05 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:18.898 10:51:05 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:18.898 10:51:05 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:18.898 10:51:05 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:18.898 10:51:05 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:18.898 10:51:05 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:18.898 10:51:05 
spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:18.898 [2024-11-15 10:51:05.661163] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:07:18.898 [2024-11-15 10:51:05.661280] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61241 ] 00:07:18.898 { 00:07:18.898 "subsystems": [ 00:07:18.898 { 00:07:18.898 "subsystem": "bdev", 00:07:18.898 "config": [ 00:07:18.898 { 00:07:18.898 "params": { 00:07:18.898 "block_size": 512, 00:07:18.898 "num_blocks": 1048576, 00:07:18.898 "name": "malloc0" 00:07:18.898 }, 00:07:18.898 "method": "bdev_malloc_create" 00:07:18.898 }, 00:07:18.898 { 00:07:18.898 "params": { 00:07:18.898 "filename": "/dev/zram1", 00:07:18.898 "name": "uring0" 00:07:18.898 }, 00:07:18.898 "method": "bdev_uring_create" 00:07:18.898 }, 00:07:18.898 { 00:07:18.898 "params": { 00:07:18.898 "name": "uring0" 00:07:18.898 }, 00:07:18.898 "method": "bdev_uring_delete" 00:07:18.898 }, 00:07:18.898 { 00:07:18.898 "method": "bdev_wait_for_examine" 00:07:18.898 } 00:07:18.898 ] 00:07:18.898 } 00:07:18.898 ] 00:07:18.898 } 00:07:19.157 [2024-11-15 10:51:05.806221] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.157 [2024-11-15 10:51:05.849439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.157 [2024-11-15 10:51:05.921296] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:19.415 [2024-11-15 10:51:06.168709] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:07:19.415 [2024-11-15 10:51:06.168769] spdk_dd.c: 933:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:07:19.415 [2024-11-15 10:51:06.168779] spdk_dd.c:1090:dd_run: *ERROR*: uring0: No such device 00:07:19.415 [2024-11-15 10:51:06.168789] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:19.982 [2024-11-15 10:51:06.596162] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:19.982 10:51:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@655 -- # es=237 00:07:19.982 10:51:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:19.982 10:51:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@664 -- # es=109 00:07:19.982 10:51:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@665 -- # case "$es" in 00:07:19.982 10:51:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@672 -- # es=1 00:07:19.982 10:51:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:19.982 10:51:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:07:19.982 10:51:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # local id=1 00:07:19.982 10:51:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@170 -- # [[ -e /sys/block/zram1 ]] 00:07:19.982 10:51:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # echo 1 00:07:19.982 10:51:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@173 -- # echo 1 00:07:19.982 10:51:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:20.240 00:07:20.240 real 0m14.925s 00:07:20.240 ************************************ 00:07:20.240 END TEST dd_uring_copy 00:07:20.240 ************************************ 00:07:20.240 user 0m9.992s 00:07:20.240 sys 0m12.626s 00:07:20.240 10:51:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:20.240 10:51:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:20.240 ************************************ 00:07:20.240 END TEST spdk_dd_uring 00:07:20.240 ************************************ 00:07:20.240 00:07:20.240 real 0m15.171s 00:07:20.240 user 0m10.122s 00:07:20.240 sys 0m12.746s 00:07:20.240 10:51:06 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:20.240 10:51:06 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:07:20.240 10:51:07 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:07:20.240 10:51:07 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:20.240 10:51:07 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:20.240 10:51:07 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:20.240 ************************************ 00:07:20.240 START TEST spdk_dd_sparse 00:07:20.240 ************************************ 00:07:20.240 10:51:07 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:07:20.500 * Looking for test storage... 00:07:20.500 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:20.500 10:51:07 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:20.500 10:51:07 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1693 -- # lcov --version 00:07:20.500 10:51:07 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:20.500 10:51:07 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:20.500 10:51:07 spdk_dd.spdk_dd_sparse -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:20.500 10:51:07 spdk_dd.spdk_dd_sparse -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:20.500 10:51:07 spdk_dd.spdk_dd_sparse -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:20.500 10:51:07 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # IFS=.-: 00:07:20.500 10:51:07 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # read -ra ver1 00:07:20.500 10:51:07 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # IFS=.-: 00:07:20.500 10:51:07 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # read -ra ver2 00:07:20.500 10:51:07 spdk_dd.spdk_dd_sparse -- scripts/common.sh@338 -- # local 'op=<' 00:07:20.500 10:51:07 spdk_dd.spdk_dd_sparse -- scripts/common.sh@340 -- # ver1_l=2 00:07:20.500 10:51:07 spdk_dd.spdk_dd_sparse -- scripts/common.sh@341 -- # ver2_l=1 00:07:20.500 10:51:07 spdk_dd.spdk_dd_sparse -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:20.500 10:51:07 spdk_dd.spdk_dd_sparse -- scripts/common.sh@344 -- # case "$op" in 00:07:20.500 10:51:07 spdk_dd.spdk_dd_sparse -- scripts/common.sh@345 -- # : 1 00:07:20.500 10:51:07 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:20.500 10:51:07 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:20.500 10:51:07 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # decimal 1 00:07:20.500 10:51:07 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=1 00:07:20.500 10:51:07 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:20.500 10:51:07 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 1 00:07:20.500 10:51:07 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # ver1[v]=1 00:07:20.500 10:51:07 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # decimal 2 00:07:20.500 10:51:07 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=2 00:07:20.500 10:51:07 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:20.500 10:51:07 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 2 00:07:20.500 10:51:07 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # ver2[v]=2 00:07:20.500 10:51:07 spdk_dd.spdk_dd_sparse -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:20.500 10:51:07 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:20.500 10:51:07 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # return 0 00:07:20.500 10:51:07 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:20.500 10:51:07 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:20.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.500 --rc genhtml_branch_coverage=1 00:07:20.500 --rc genhtml_function_coverage=1 00:07:20.500 --rc genhtml_legend=1 00:07:20.500 --rc geninfo_all_blocks=1 00:07:20.500 --rc geninfo_unexecuted_blocks=1 00:07:20.500 00:07:20.500 ' 00:07:20.500 10:51:07 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:20.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.500 --rc genhtml_branch_coverage=1 00:07:20.500 --rc genhtml_function_coverage=1 00:07:20.500 --rc genhtml_legend=1 00:07:20.500 --rc geninfo_all_blocks=1 00:07:20.500 --rc geninfo_unexecuted_blocks=1 00:07:20.500 00:07:20.500 ' 00:07:20.500 10:51:07 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:20.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.500 --rc genhtml_branch_coverage=1 00:07:20.500 --rc genhtml_function_coverage=1 00:07:20.500 --rc genhtml_legend=1 00:07:20.500 --rc geninfo_all_blocks=1 00:07:20.500 --rc geninfo_unexecuted_blocks=1 00:07:20.500 00:07:20.500 ' 00:07:20.500 10:51:07 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:20.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.500 --rc genhtml_branch_coverage=1 00:07:20.500 --rc genhtml_function_coverage=1 00:07:20.500 --rc genhtml_legend=1 00:07:20.500 --rc geninfo_all_blocks=1 00:07:20.500 --rc geninfo_unexecuted_blocks=1 00:07:20.500 00:07:20.500 ' 00:07:20.500 10:51:07 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:20.500 10:51:07 spdk_dd.spdk_dd_sparse -- scripts/common.sh@15 -- # shopt -s extglob 00:07:20.500 10:51:07 spdk_dd.spdk_dd_sparse -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:20.500 10:51:07 spdk_dd.spdk_dd_sparse -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:20.500 10:51:07 spdk_dd.spdk_dd_sparse -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:20.500 10:51:07 
spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.500 10:51:07 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.500 10:51:07 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.500 10:51:07 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:07:20.500 10:51:07 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.500 10:51:07 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:07:20.500 10:51:07 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:07:20.500 10:51:07 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:07:20.500 10:51:07 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:07:20.500 10:51:07 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:07:20.500 10:51:07 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:07:20.500 10:51:07 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:07:20.500 10:51:07 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:07:20.500 10:51:07 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:07:20.500 10:51:07 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:07:20.500 10:51:07 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:07:20.500 1+0 records in 00:07:20.500 1+0 records out 00:07:20.500 4194304 bytes (4.2 MB, 
4.0 MiB) copied, 0.0117051 s, 358 MB/s 00:07:20.500 10:51:07 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:07:20.500 1+0 records in 00:07:20.500 1+0 records out 00:07:20.500 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00812829 s, 516 MB/s 00:07:20.500 10:51:07 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:07:20.500 1+0 records in 00:07:20.500 1+0 records out 00:07:20.500 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00803095 s, 522 MB/s 00:07:20.500 10:51:07 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:07:20.500 10:51:07 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:20.501 10:51:07 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:20.501 10:51:07 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:20.501 ************************************ 00:07:20.501 START TEST dd_sparse_file_to_file 00:07:20.501 ************************************ 00:07:20.501 10:51:07 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1129 -- # file_to_file 00:07:20.501 10:51:07 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:07:20.501 10:51:07 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:07:20.501 10:51:07 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:20.501 10:51:07 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:07:20.501 10:51:07 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:07:20.501 10:51:07 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:07:20.501 10:51:07 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:07:20.501 10:51:07 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:07:20.501 10:51:07 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:07:20.501 10:51:07 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:20.501 [2024-11-15 10:51:07.335741] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
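The prepare step above creates the sparse input for this suite: a 100 MiB backing file for the dd_aio bdev, and file_zero1 with three 4 MiB runs of zeroes written at offsets 0, 16 MiB and 32 MiB, so its apparent size is 36 MiB (37748736 bytes) while only 12 MiB (24576 512-byte blocks) is actually allocated. The copies that follow push that file through the AIO bdev and lvstore described in the JSON config passed on --json /dev/fd/62, with --sparse enabling hole skipping. A minimal stand-alone rendering of the prepared layout and of the size/allocation check the tests repeat, assuming GNU coreutils truncate, dd and stat:

truncate dd_sparse_aio_disk --size 104857600           # 100 MiB backing file for the dd_aio bdev
dd if=/dev/zero of=file_zero1 bs=4M count=1            # 4 MiB of data at offset 0
dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4     # 4 MiB at 16 MiB; the gap before it stays a hole
dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8     # 4 MiB at 32 MiB; apparent size grows to 36 MiB
stat --printf='apparent=%s allocated_blocks=%b\n' file_zero1
# expected: apparent=37748736 allocated_blocks=24576 (i.e. 12 MiB of real extents)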
00:07:20.501 [2024-11-15 10:51:07.335839] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61342 ] 00:07:20.501 { 00:07:20.501 "subsystems": [ 00:07:20.501 { 00:07:20.501 "subsystem": "bdev", 00:07:20.501 "config": [ 00:07:20.501 { 00:07:20.501 "params": { 00:07:20.501 "block_size": 4096, 00:07:20.501 "filename": "dd_sparse_aio_disk", 00:07:20.501 "name": "dd_aio" 00:07:20.501 }, 00:07:20.501 "method": "bdev_aio_create" 00:07:20.501 }, 00:07:20.501 { 00:07:20.501 "params": { 00:07:20.501 "lvs_name": "dd_lvstore", 00:07:20.501 "bdev_name": "dd_aio" 00:07:20.501 }, 00:07:20.501 "method": "bdev_lvol_create_lvstore" 00:07:20.501 }, 00:07:20.501 { 00:07:20.501 "method": "bdev_wait_for_examine" 00:07:20.501 } 00:07:20.501 ] 00:07:20.501 } 00:07:20.501 ] 00:07:20.501 } 00:07:20.759 [2024-11-15 10:51:07.472342] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.759 [2024-11-15 10:51:07.524688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.759 [2024-11-15 10:51:07.592987] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:21.018  [2024-11-15T10:51:08.138Z] Copying: 12/36 [MB] (average 857 MBps) 00:07:21.277 00:07:21.277 10:51:07 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:07:21.277 10:51:07 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:07:21.277 10:51:07 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:07:21.277 10:51:07 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:07:21.277 10:51:07 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:07:21.277 10:51:07 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:07:21.277 10:51:07 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:07:21.277 10:51:07 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:07:21.277 10:51:07 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:07:21.277 10:51:07 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:07:21.277 00:07:21.277 real 0m0.701s 00:07:21.277 user 0m0.433s 00:07:21.277 sys 0m0.397s 00:07:21.277 10:51:07 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:21.277 10:51:07 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:21.277 ************************************ 00:07:21.277 END TEST dd_sparse_file_to_file 00:07:21.277 ************************************ 00:07:21.277 10:51:08 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:07:21.277 10:51:08 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:21.277 10:51:08 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:21.277 10:51:08 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:21.277 ************************************ 00:07:21.277 START TEST dd_sparse_file_to_bdev 
00:07:21.277 ************************************ 00:07:21.277 10:51:08 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1129 -- # file_to_bdev 00:07:21.277 10:51:08 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:21.277 10:51:08 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:07:21.277 10:51:08 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true') 00:07:21.277 10:51:08 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:07:21.277 10:51:08 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:07:21.277 10:51:08 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:07:21.277 10:51:08 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:21.277 10:51:08 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:21.277 [2024-11-15 10:51:08.106282] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:07:21.277 [2024-11-15 10:51:08.106397] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61384 ] 00:07:21.277 { 00:07:21.277 "subsystems": [ 00:07:21.277 { 00:07:21.277 "subsystem": "bdev", 00:07:21.277 "config": [ 00:07:21.277 { 00:07:21.277 "params": { 00:07:21.277 "block_size": 4096, 00:07:21.278 "filename": "dd_sparse_aio_disk", 00:07:21.278 "name": "dd_aio" 00:07:21.278 }, 00:07:21.278 "method": "bdev_aio_create" 00:07:21.278 }, 00:07:21.278 { 00:07:21.278 "params": { 00:07:21.278 "lvs_name": "dd_lvstore", 00:07:21.278 "lvol_name": "dd_lvol", 00:07:21.278 "size_in_mib": 36, 00:07:21.278 "thin_provision": true 00:07:21.278 }, 00:07:21.278 "method": "bdev_lvol_create" 00:07:21.278 }, 00:07:21.278 { 00:07:21.278 "method": "bdev_wait_for_examine" 00:07:21.278 } 00:07:21.278 ] 00:07:21.278 } 00:07:21.278 ] 00:07:21.278 } 00:07:21.537 [2024-11-15 10:51:08.250583] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.537 [2024-11-15 10:51:08.297404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.537 [2024-11-15 10:51:08.365230] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:21.795  [2024-11-15T10:51:08.915Z] Copying: 12/36 [MB] (average 400 MBps) 00:07:22.054 00:07:22.054 00:07:22.054 real 0m0.695s 00:07:22.054 user 0m0.470s 00:07:22.054 sys 0m0.384s 00:07:22.054 10:51:08 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:22.054 10:51:08 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:22.054 ************************************ 00:07:22.054 END TEST dd_sparse_file_to_bdev 00:07:22.054 ************************************ 00:07:22.054 10:51:08 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file 
bdev_to_file 00:07:22.054 10:51:08 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:22.054 10:51:08 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:22.054 10:51:08 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:22.054 ************************************ 00:07:22.054 START TEST dd_sparse_bdev_to_file 00:07:22.054 ************************************ 00:07:22.054 10:51:08 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1129 -- # bdev_to_file 00:07:22.054 10:51:08 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:07:22.054 10:51:08 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:07:22.054 10:51:08 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:22.054 10:51:08 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:07:22.054 10:51:08 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:07:22.054 10:51:08 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:07:22.054 10:51:08 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:07:22.054 10:51:08 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:22.054 [2024-11-15 10:51:08.849035] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:07:22.054 [2024-11-15 10:51:08.849639] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61422 ] 00:07:22.054 { 00:07:22.054 "subsystems": [ 00:07:22.054 { 00:07:22.054 "subsystem": "bdev", 00:07:22.054 "config": [ 00:07:22.054 { 00:07:22.054 "params": { 00:07:22.054 "block_size": 4096, 00:07:22.054 "filename": "dd_sparse_aio_disk", 00:07:22.054 "name": "dd_aio" 00:07:22.054 }, 00:07:22.054 "method": "bdev_aio_create" 00:07:22.054 }, 00:07:22.054 { 00:07:22.054 "method": "bdev_wait_for_examine" 00:07:22.054 } 00:07:22.054 ] 00:07:22.054 } 00:07:22.054 ] 00:07:22.054 } 00:07:22.312 [2024-11-15 10:51:08.994087] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.312 [2024-11-15 10:51:09.038366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.312 [2024-11-15 10:51:09.105111] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:22.570  [2024-11-15T10:51:09.690Z] Copying: 12/36 [MB] (average 857 MBps) 00:07:22.829 00:07:22.829 10:51:09 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:07:22.829 10:51:09 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:07:22.829 10:51:09 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:07:22.829 10:51:09 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:07:22.829 10:51:09 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 
37748736 == \3\7\7\4\8\7\3\6 ]] 00:07:22.829 10:51:09 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:07:22.829 10:51:09 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:07:22.829 10:51:09 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:07:22.829 10:51:09 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:07:22.829 10:51:09 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:07:22.829 00:07:22.829 real 0m0.684s 00:07:22.829 user 0m0.435s 00:07:22.829 sys 0m0.385s 00:07:22.829 10:51:09 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:22.829 10:51:09 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:22.829 ************************************ 00:07:22.829 END TEST dd_sparse_bdev_to_file 00:07:22.829 ************************************ 00:07:22.829 10:51:09 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:07:22.829 10:51:09 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:07:22.829 10:51:09 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:07:22.829 10:51:09 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:07:22.829 10:51:09 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:07:22.829 00:07:22.829 real 0m2.523s 00:07:22.829 user 0m1.516s 00:07:22.829 sys 0m1.404s 00:07:22.829 10:51:09 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:22.829 10:51:09 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:22.829 ************************************ 00:07:22.829 END TEST spdk_dd_sparse 00:07:22.829 ************************************ 00:07:22.829 10:51:09 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:07:22.829 10:51:09 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:22.829 10:51:09 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:22.829 10:51:09 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:22.829 ************************************ 00:07:22.829 START TEST spdk_dd_negative 00:07:22.829 ************************************ 00:07:22.829 10:51:09 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:07:22.829 * Looking for test storage... 
00:07:23.089 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:23.089 10:51:09 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:23.089 10:51:09 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1693 -- # lcov --version 00:07:23.089 10:51:09 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:23.089 10:51:09 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:23.089 10:51:09 spdk_dd.spdk_dd_negative -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:23.089 10:51:09 spdk_dd.spdk_dd_negative -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:23.089 10:51:09 spdk_dd.spdk_dd_negative -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:23.089 10:51:09 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # IFS=.-: 00:07:23.089 10:51:09 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # read -ra ver1 00:07:23.089 10:51:09 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # IFS=.-: 00:07:23.089 10:51:09 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # read -ra ver2 00:07:23.089 10:51:09 spdk_dd.spdk_dd_negative -- scripts/common.sh@338 -- # local 'op=<' 00:07:23.089 10:51:09 spdk_dd.spdk_dd_negative -- scripts/common.sh@340 -- # ver1_l=2 00:07:23.089 10:51:09 spdk_dd.spdk_dd_negative -- scripts/common.sh@341 -- # ver2_l=1 00:07:23.089 10:51:09 spdk_dd.spdk_dd_negative -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:23.089 10:51:09 spdk_dd.spdk_dd_negative -- scripts/common.sh@344 -- # case "$op" in 00:07:23.089 10:51:09 spdk_dd.spdk_dd_negative -- scripts/common.sh@345 -- # : 1 00:07:23.089 10:51:09 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:23.090 10:51:09 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:23.090 10:51:09 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # decimal 1 00:07:23.090 10:51:09 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=1 00:07:23.090 10:51:09 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:23.090 10:51:09 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 1 00:07:23.090 10:51:09 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # ver1[v]=1 00:07:23.090 10:51:09 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # decimal 2 00:07:23.090 10:51:09 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=2 00:07:23.090 10:51:09 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:23.090 10:51:09 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 2 00:07:23.090 10:51:09 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # ver2[v]=2 00:07:23.090 10:51:09 spdk_dd.spdk_dd_negative -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:23.090 10:51:09 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:23.090 10:51:09 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # return 0 00:07:23.090 10:51:09 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:23.090 10:51:09 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:23.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.090 --rc genhtml_branch_coverage=1 00:07:23.090 --rc genhtml_function_coverage=1 00:07:23.090 --rc genhtml_legend=1 00:07:23.090 --rc geninfo_all_blocks=1 00:07:23.090 --rc geninfo_unexecuted_blocks=1 00:07:23.090 00:07:23.090 ' 00:07:23.090 10:51:09 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:23.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.090 --rc genhtml_branch_coverage=1 00:07:23.090 --rc genhtml_function_coverage=1 00:07:23.090 --rc genhtml_legend=1 00:07:23.090 --rc geninfo_all_blocks=1 00:07:23.090 --rc geninfo_unexecuted_blocks=1 00:07:23.090 00:07:23.090 ' 00:07:23.090 10:51:09 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:23.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.090 --rc genhtml_branch_coverage=1 00:07:23.090 --rc genhtml_function_coverage=1 00:07:23.090 --rc genhtml_legend=1 00:07:23.090 --rc geninfo_all_blocks=1 00:07:23.090 --rc geninfo_unexecuted_blocks=1 00:07:23.090 00:07:23.090 ' 00:07:23.090 10:51:09 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:23.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.090 --rc genhtml_branch_coverage=1 00:07:23.090 --rc genhtml_function_coverage=1 00:07:23.090 --rc genhtml_legend=1 00:07:23.090 --rc geninfo_all_blocks=1 00:07:23.090 --rc geninfo_unexecuted_blocks=1 00:07:23.090 00:07:23.090 ' 00:07:23.090 10:51:09 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:23.090 10:51:09 spdk_dd.spdk_dd_negative -- scripts/common.sh@15 -- # shopt -s extglob 00:07:23.090 10:51:09 spdk_dd.spdk_dd_negative -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:23.090 10:51:09 spdk_dd.spdk_dd_negative -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:23.090 10:51:09 spdk_dd.spdk_dd_negative -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:07:23.090 10:51:09 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.090 10:51:09 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.090 10:51:09 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.090 10:51:09 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:07:23.090 10:51:09 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.090 10:51:09 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@210 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:23.090 10:51:09 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@211 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:23.090 10:51:09 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@213 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:23.090 10:51:09 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@214 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:23.090 10:51:09 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@216 -- # run_test dd_invalid_arguments invalid_arguments 00:07:23.090 10:51:09 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:23.090 10:51:09 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:23.090 10:51:09 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:23.090 ************************************ 00:07:23.090 START TEST 
dd_invalid_arguments 00:07:23.090 ************************************ 00:07:23.090 10:51:09 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1129 -- # invalid_arguments 00:07:23.090 10:51:09 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:23.090 10:51:09 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@652 -- # local es=0 00:07:23.090 10:51:09 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:23.090 10:51:09 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:23.090 10:51:09 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:23.090 10:51:09 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:23.090 10:51:09 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:23.090 10:51:09 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:23.091 10:51:09 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:23.091 10:51:09 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:23.091 10:51:09 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:23.091 10:51:09 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:23.091 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:07:23.091 00:07:23.091 CPU options: 00:07:23.091 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:07:23.091 (like [0,1,10]) 00:07:23.091 --lcores lcore to CPU mapping list. The list is in the format: 00:07:23.091 [<,lcores[@CPUs]>...] 00:07:23.091 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:07:23.091 Within the group, '-' is used for range separator, 00:07:23.091 ',' is used for single number separator. 00:07:23.091 '( )' can be omitted for single element group, 00:07:23.091 '@' can be omitted if cpus and lcores have the same value 00:07:23.091 --disable-cpumask-locks Disable CPU core lock files. 00:07:23.091 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:07:23.091 pollers in the app support interrupt mode) 00:07:23.091 -p, --main-core main (primary) core for DPDK 00:07:23.091 00:07:23.091 Configuration options: 00:07:23.091 -c, --config, --json JSON config file 00:07:23.091 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:07:23.091 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:07:23.091 --wait-for-rpc wait for RPCs to initialize subsystems 00:07:23.091 --rpcs-allowed comma-separated list of permitted RPCS 00:07:23.091 --json-ignore-init-errors don't exit on invalid config entry 00:07:23.091 00:07:23.091 Memory options: 00:07:23.091 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:07:23.091 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:07:23.091 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:07:23.091 -R, --huge-unlink unlink huge files after initialization 00:07:23.091 -n, --mem-channels number of memory channels used for DPDK 00:07:23.091 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:07:23.091 --msg-mempool-size global message memory pool size in count (default: 262143) 00:07:23.091 --no-huge run without using hugepages 00:07:23.091 --enforce-numa enforce NUMA allocations from the specified NUMA node 00:07:23.091 -i, --shm-id shared memory ID (optional) 00:07:23.091 -g, --single-file-segments force creating just one hugetlbfs file 00:07:23.091 00:07:23.091 PCI options: 00:07:23.091 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:07:23.091 -B, --pci-blocked pci addr to block (can be used more than once) 00:07:23.091 -u, --no-pci disable PCI access 00:07:23.091 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:07:23.091 00:07:23.091 Log options: 00:07:23.091 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:07:23.091 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:07:23.091 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:07:23.091 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:07:23.091 blobfs_rw, fsdev, fsdev_aio, ftl_core, ftl_init, gpt_parse, idxd, ioat, 00:07:23.091 iscsi_init, json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, 00:07:23.091 nvme, nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, 00:07:23.091 sock_posix, spdk_aio_mgr_io, thread, trace, uring, vbdev_delay, 00:07:23.091 vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, 00:07:23.091 vbdev_zone_block, vfio_pci, vfio_user, virtio, virtio_blk, virtio_dev, 00:07:23.091 virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:07:23.091 --silence-noticelog disable notice level logging to stderr 00:07:23.091 00:07:23.091 Trace options: 00:07:23.091 --num-trace-entries number of trace entries for each core, must be power of 2, 00:07:23.091 setting 0 to disable trace (default 32768) 00:07:23.091 Tracepoints vary in size and can use more than one trace entry. 00:07:23.091 -e, --tpoint-group [:] 00:07:23.091 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:07:23.091 [2024-11-15 10:51:09.858610] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:07:23.091 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:07:23.091 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, blob, 00:07:23.091 bdev_raid, scheduler, all). 00:07:23.091 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:07:23.091 a tracepoint group. First tpoint inside a group can be enabled by 00:07:23.091 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:07:23.091 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 00:07:23.091 in /include/spdk_internal/trace_defs.h 00:07:23.091 00:07:23.091 Other options: 00:07:23.091 -h, --help show this usage 00:07:23.091 -v, --version print SPDK version 00:07:23.091 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:07:23.091 --env-context Opaque context for use of the env implementation 00:07:23.091 00:07:23.091 Application specific: 00:07:23.091 [--------- DD Options ---------] 00:07:23.091 --if Input file. Must specify either --if or --ib. 00:07:23.091 --ib Input bdev. Must specifier either --if or --ib 00:07:23.091 --of Output file. Must specify either --of or --ob. 00:07:23.091 --ob Output bdev. Must specify either --of or --ob. 00:07:23.091 --iflag Input file flags. 00:07:23.091 --oflag Output file flags. 00:07:23.091 --bs I/O unit size (default: 4096) 00:07:23.091 --qd Queue depth (default: 2) 00:07:23.091 --count I/O unit count. The number of I/O units to copy. (default: all) 00:07:23.091 --skip Skip this many I/O units at start of input. (default: 0) 00:07:23.091 --seek Skip this many I/O units at start of output. (default: 0) 00:07:23.091 --aio Force usage of AIO. (by default io_uring is used if available) 00:07:23.091 --sparse Enable hole skipping in input target 00:07:23.091 Available iflag and oflag values: 00:07:23.091 append - append mode 00:07:23.091 direct - use direct I/O for data 00:07:23.091 directory - fail unless a directory 00:07:23.091 dsync - use synchronized I/O for data 00:07:23.092 noatime - do not update access time 00:07:23.092 noctty - do not assign controlling terminal from file 00:07:23.092 nofollow - do not follow symlinks 00:07:23.092 nonblock - use non-blocking I/O 00:07:23.092 sync - use synchronized I/O for data and metadata 00:07:23.092 10:51:09 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@655 -- # es=2 00:07:23.092 10:51:09 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:23.092 10:51:09 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:23.092 10:51:09 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:23.092 00:07:23.092 real 0m0.059s 00:07:23.092 user 0m0.037s 00:07:23.092 sys 0m0.022s 00:07:23.092 10:51:09 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:23.092 10:51:09 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:07:23.092 ************************************ 00:07:23.092 END TEST dd_invalid_arguments 00:07:23.092 ************************************ 00:07:23.092 10:51:09 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@217 -- # run_test dd_double_input double_input 00:07:23.092 10:51:09 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:23.092 10:51:09 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:23.092 10:51:09 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:23.092 ************************************ 00:07:23.092 START TEST dd_double_input 00:07:23.092 ************************************ 00:07:23.092 10:51:09 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1129 -- # double_input 00:07:23.092 10:51:09 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:23.092 10:51:09 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@652 -- # local es=0 00:07:23.092 10:51:09 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:23.092 10:51:09 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:23.092 10:51:09 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:23.092 10:51:09 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:23.092 10:51:09 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:23.092 10:51:09 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:23.092 10:51:09 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:23.092 10:51:09 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:23.092 10:51:09 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:23.092 10:51:09 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:23.350 [2024-11-15 10:51:09.959373] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 
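Each negative case here follows the same shape: run spdk_dd with a deliberately invalid argument combination, require a non-zero exit status (es=22, i.e. EINVAL, for plain argument errors), and match the *ERROR* line it prints. A rough stand-alone equivalent of the dd_double_input case above, without the NOT/valid_exec_arg plumbing from autotest_common.sh (the dump-file name is illustrative):

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
touch dd.dump0                                          # stand-in for the test's input dump file
if "$SPDK_DD" --if=dd.dump0 --ib= --ob= >out.log 2>&1; then
    echo "expected spdk_dd to reject --if combined with --ib" >&2
    exit 1
fi
grep -q 'You may specify either --if or --ib, but not both.' out.log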
00:07:23.350 10:51:09 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@655 -- # es=22 00:07:23.350 10:51:09 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:23.350 10:51:09 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:23.350 10:51:09 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:23.350 00:07:23.350 real 0m0.055s 00:07:23.350 user 0m0.036s 00:07:23.350 sys 0m0.018s 00:07:23.350 10:51:09 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:23.350 10:51:09 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:07:23.350 ************************************ 00:07:23.350 END TEST dd_double_input 00:07:23.350 ************************************ 00:07:23.350 10:51:10 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@218 -- # run_test dd_double_output double_output 00:07:23.350 10:51:10 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:23.350 10:51:10 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:23.350 10:51:10 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:23.350 ************************************ 00:07:23.350 START TEST dd_double_output 00:07:23.350 ************************************ 00:07:23.350 10:51:10 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1129 -- # double_output 00:07:23.351 10:51:10 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:23.351 10:51:10 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@652 -- # local es=0 00:07:23.351 10:51:10 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:23.351 10:51:10 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:23.351 10:51:10 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:23.351 10:51:10 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:23.351 10:51:10 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:23.351 10:51:10 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:23.351 10:51:10 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:23.351 10:51:10 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:23.351 10:51:10 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:23.351 10:51:10 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:23.351 [2024-11-15 10:51:10.093223] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:07:23.351 10:51:10 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@655 -- # es=22 00:07:23.351 10:51:10 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:23.351 10:51:10 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:23.351 10:51:10 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:23.351 00:07:23.351 real 0m0.089s 00:07:23.351 user 0m0.054s 00:07:23.351 sys 0m0.033s 00:07:23.351 10:51:10 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:23.351 10:51:10 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:07:23.351 ************************************ 00:07:23.351 END TEST dd_double_output 00:07:23.351 ************************************ 00:07:23.351 10:51:10 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@219 -- # run_test dd_no_input no_input 00:07:23.351 10:51:10 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:23.351 10:51:10 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:23.351 10:51:10 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:23.351 ************************************ 00:07:23.351 START TEST dd_no_input 00:07:23.351 ************************************ 00:07:23.351 10:51:10 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1129 -- # no_input 00:07:23.351 10:51:10 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:23.351 10:51:10 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@652 -- # local es=0 00:07:23.351 10:51:10 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:23.351 10:51:10 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:23.351 10:51:10 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:23.351 10:51:10 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:23.351 10:51:10 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:23.351 10:51:10 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:23.351 10:51:10 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:23.351 10:51:10 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:23.351 10:51:10 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:23.351 10:51:10 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:23.609 [2024-11-15 10:51:10.225508] spdk_dd.c:1499:main: 
*ERROR*: You must specify either --if or --ib 00:07:23.609 10:51:10 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@655 -- # es=22 00:07:23.609 10:51:10 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:23.609 10:51:10 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:23.610 10:51:10 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:23.610 00:07:23.610 real 0m0.079s 00:07:23.610 user 0m0.048s 00:07:23.610 sys 0m0.030s 00:07:23.610 10:51:10 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:23.610 10:51:10 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:07:23.610 ************************************ 00:07:23.610 END TEST dd_no_input 00:07:23.610 ************************************ 00:07:23.610 10:51:10 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@220 -- # run_test dd_no_output no_output 00:07:23.610 10:51:10 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:23.610 10:51:10 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:23.610 10:51:10 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:23.610 ************************************ 00:07:23.610 START TEST dd_no_output 00:07:23.610 ************************************ 00:07:23.610 10:51:10 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1129 -- # no_output 00:07:23.610 10:51:10 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:23.610 10:51:10 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@652 -- # local es=0 00:07:23.610 10:51:10 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:23.610 10:51:10 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:23.610 10:51:10 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:23.610 10:51:10 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:23.610 10:51:10 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:23.610 10:51:10 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:23.610 10:51:10 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:23.610 10:51:10 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:23.610 10:51:10 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:23.610 10:51:10 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:23.610 [2024-11-15 10:51:10.365039] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:07:23.610 10:51:10 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@655 -- # es=22 00:07:23.610 10:51:10 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:23.610 10:51:10 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:23.610 10:51:10 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:23.610 00:07:23.610 real 0m0.080s 00:07:23.610 user 0m0.052s 00:07:23.610 sys 0m0.026s 00:07:23.610 10:51:10 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:23.610 10:51:10 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:07:23.610 ************************************ 00:07:23.610 END TEST dd_no_output 00:07:23.610 ************************************ 00:07:23.610 10:51:10 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@221 -- # run_test dd_wrong_blocksize wrong_blocksize 00:07:23.610 10:51:10 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:23.610 10:51:10 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:23.610 10:51:10 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:23.610 ************************************ 00:07:23.610 START TEST dd_wrong_blocksize 00:07:23.610 ************************************ 00:07:23.610 10:51:10 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1129 -- # wrong_blocksize 00:07:23.610 10:51:10 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:23.610 10:51:10 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@652 -- # local es=0 00:07:23.610 10:51:10 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:23.610 10:51:10 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:23.610 10:51:10 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:23.610 10:51:10 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:23.610 10:51:10 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:23.610 10:51:10 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:23.610 10:51:10 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:23.610 10:51:10 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:23.610 10:51:10 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:23.610 10:51:10 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:23.898 [2024-11-15 10:51:10.494479] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:07:23.898 10:51:10 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@655 -- # es=22 00:07:23.898 10:51:10 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:23.898 10:51:10 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:23.898 10:51:10 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:23.898 00:07:23.898 real 0m0.076s 00:07:23.898 user 0m0.044s 00:07:23.898 sys 0m0.031s 00:07:23.898 10:51:10 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:23.898 10:51:10 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:07:23.898 ************************************ 00:07:23.898 END TEST dd_wrong_blocksize 00:07:23.898 ************************************ 00:07:23.898 10:51:10 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@222 -- # run_test dd_smaller_blocksize smaller_blocksize 00:07:23.898 10:51:10 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:23.898 10:51:10 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:23.898 10:51:10 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:23.898 ************************************ 00:07:23.898 START TEST dd_smaller_blocksize 00:07:23.898 ************************************ 00:07:23.898 10:51:10 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1129 -- # smaller_blocksize 00:07:23.898 10:51:10 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:23.898 10:51:10 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@652 -- # local es=0 00:07:23.898 10:51:10 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:23.898 10:51:10 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:23.898 10:51:10 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:23.898 10:51:10 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:23.898 10:51:10 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:23.898 10:51:10 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:23.898 10:51:10 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:23.898 10:51:10 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:23.898 
10:51:10 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:23.898 10:51:10 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:23.898 [2024-11-15 10:51:10.634357] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:07:23.898 [2024-11-15 10:51:10.634468] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61649 ] 00:07:24.188 [2024-11-15 10:51:10.786869] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.188 [2024-11-15 10:51:10.849190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.188 [2024-11-15 10:51:10.923615] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:24.446 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:07:24.705 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:07:24.705 [2024-11-15 10:51:11.552690] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:07:24.705 [2024-11-15 10:51:11.552756] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:24.964 [2024-11-15 10:51:11.713764] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:24.964 10:51:11 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@655 -- # es=244 00:07:24.964 10:51:11 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:24.964 10:51:11 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@664 -- # es=116 00:07:24.964 10:51:11 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@665 -- # case "$es" in 00:07:24.964 10:51:11 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@672 -- # es=1 00:07:24.964 10:51:11 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:24.964 00:07:24.964 real 0m1.226s 00:07:24.964 user 0m0.452s 00:07:24.964 sys 0m0.666s 00:07:24.964 10:51:11 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:24.964 ************************************ 00:07:24.964 END TEST dd_smaller_blocksize 00:07:24.964 ************************************ 00:07:24.964 10:51:11 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:07:25.224 10:51:11 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@223 -- # run_test dd_invalid_count invalid_count 00:07:25.224 10:51:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:25.224 10:51:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:25.224 10:51:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:25.224 ************************************ 00:07:25.224 START TEST dd_invalid_count 00:07:25.224 ************************************ 00:07:25.224 10:51:11 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1129 -- # invalid_count 
00:07:25.224 10:51:11 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:25.224 10:51:11 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@652 -- # local es=0 00:07:25.224 10:51:11 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:25.224 10:51:11 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:25.224 10:51:11 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:25.224 10:51:11 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:25.224 10:51:11 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:25.224 10:51:11 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:25.224 10:51:11 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:25.224 10:51:11 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:25.224 10:51:11 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:25.224 10:51:11 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:25.224 [2024-11-15 10:51:11.915087] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:07:25.224 10:51:11 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@655 -- # es=22 00:07:25.224 10:51:11 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:25.224 10:51:11 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:25.224 10:51:11 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:25.224 00:07:25.224 real 0m0.081s 00:07:25.224 user 0m0.048s 00:07:25.224 sys 0m0.032s 00:07:25.224 10:51:11 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:25.224 ************************************ 00:07:25.224 10:51:11 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:07:25.224 END TEST dd_invalid_count 00:07:25.224 ************************************ 00:07:25.224 10:51:11 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@224 -- # run_test dd_invalid_oflag invalid_oflag 00:07:25.224 10:51:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:25.224 10:51:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:25.224 10:51:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:25.224 ************************************ 
00:07:25.224 START TEST dd_invalid_oflag 00:07:25.224 ************************************ 00:07:25.224 10:51:11 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1129 -- # invalid_oflag 00:07:25.224 10:51:11 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:25.224 10:51:11 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@652 -- # local es=0 00:07:25.224 10:51:11 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:25.224 10:51:11 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:25.224 10:51:11 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:25.224 10:51:11 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:25.224 10:51:11 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:25.224 10:51:11 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:25.224 10:51:11 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:25.224 10:51:11 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:25.224 10:51:11 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:25.224 10:51:11 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:25.224 [2024-11-15 10:51:12.052029] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:07:25.224 10:51:12 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@655 -- # es=22 00:07:25.224 10:51:12 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:25.224 10:51:12 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:25.224 10:51:12 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:25.224 00:07:25.224 real 0m0.079s 00:07:25.224 user 0m0.050s 00:07:25.224 sys 0m0.029s 00:07:25.224 10:51:12 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:25.224 10:51:12 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:07:25.224 ************************************ 00:07:25.224 END TEST dd_invalid_oflag 00:07:25.224 ************************************ 00:07:25.483 10:51:12 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@225 -- # run_test dd_invalid_iflag invalid_iflag 00:07:25.483 10:51:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:25.483 10:51:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:25.483 10:51:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:25.483 ************************************ 00:07:25.483 START TEST dd_invalid_iflag 00:07:25.483 
************************************ 00:07:25.483 10:51:12 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1129 -- # invalid_iflag 00:07:25.483 10:51:12 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:25.483 10:51:12 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@652 -- # local es=0 00:07:25.483 10:51:12 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:25.483 10:51:12 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:25.483 10:51:12 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:25.483 10:51:12 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:25.483 10:51:12 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:25.483 10:51:12 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:25.483 10:51:12 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:25.484 10:51:12 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:25.484 10:51:12 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:25.484 10:51:12 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:25.484 [2024-11-15 10:51:12.192967] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:07:25.484 10:51:12 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@655 -- # es=22 00:07:25.484 10:51:12 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:25.484 10:51:12 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:25.484 10:51:12 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:25.484 00:07:25.484 real 0m0.082s 00:07:25.484 user 0m0.051s 00:07:25.484 sys 0m0.030s 00:07:25.484 10:51:12 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:25.484 10:51:12 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:07:25.484 ************************************ 00:07:25.484 END TEST dd_invalid_iflag 00:07:25.484 ************************************ 00:07:25.484 10:51:12 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@226 -- # run_test dd_unknown_flag unknown_flag 00:07:25.484 10:51:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:25.484 10:51:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:25.484 10:51:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:25.484 ************************************ 00:07:25.484 START TEST dd_unknown_flag 00:07:25.484 ************************************ 00:07:25.484 
10:51:12 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1129 -- # unknown_flag 00:07:25.484 10:51:12 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:25.484 10:51:12 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@652 -- # local es=0 00:07:25.484 10:51:12 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:25.484 10:51:12 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:25.484 10:51:12 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:25.484 10:51:12 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:25.484 10:51:12 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:25.484 10:51:12 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:25.484 10:51:12 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:25.484 10:51:12 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:25.484 10:51:12 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:25.484 10:51:12 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:25.484 [2024-11-15 10:51:12.326452] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
00:07:25.484 [2024-11-15 10:51:12.326560] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61746 ] 00:07:25.743 [2024-11-15 10:51:12.472581] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.743 [2024-11-15 10:51:12.523599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.743 [2024-11-15 10:51:12.593499] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:26.001 [2024-11-15 10:51:12.635078] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:07:26.001 [2024-11-15 10:51:12.635145] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:26.001 [2024-11-15 10:51:12.635209] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:07:26.001 [2024-11-15 10:51:12.635221] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:26.001 [2024-11-15 10:51:12.635536] spdk_dd.c:1218:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:07:26.001 [2024-11-15 10:51:12.635556] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:26.001 [2024-11-15 10:51:12.635615] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:07:26.001 [2024-11-15 10:51:12.635625] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:07:26.001 [2024-11-15 10:51:12.788733] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:26.259 10:51:12 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@655 -- # es=234 00:07:26.260 10:51:12 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:26.260 10:51:12 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@664 -- # es=106 00:07:26.260 10:51:12 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@665 -- # case "$es" in 00:07:26.260 10:51:12 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@672 -- # es=1 00:07:26.260 10:51:12 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:26.260 00:07:26.260 real 0m0.605s 00:07:26.260 user 0m0.328s 00:07:26.260 sys 0m0.185s 00:07:26.260 10:51:12 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:26.260 10:51:12 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:07:26.260 ************************************ 00:07:26.260 END TEST dd_unknown_flag 00:07:26.260 ************************************ 00:07:26.260 10:51:12 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@227 -- # run_test dd_invalid_json invalid_json 00:07:26.260 10:51:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:26.260 10:51:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:26.260 10:51:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:26.260 ************************************ 00:07:26.260 START TEST dd_invalid_json 00:07:26.260 ************************************ 00:07:26.260 10:51:12 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1129 -- # invalid_json 00:07:26.260 10:51:12 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:26.260 10:51:12 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # : 00:07:26.260 10:51:12 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@652 -- # local es=0 00:07:26.260 10:51:12 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:26.260 10:51:12 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:26.260 10:51:12 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:26.260 10:51:12 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:26.260 10:51:12 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:26.260 10:51:12 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:26.260 10:51:12 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:26.260 10:51:12 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:26.260 10:51:12 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:26.260 10:51:12 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:26.260 [2024-11-15 10:51:12.994986] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
00:07:26.260 [2024-11-15 10:51:12.995089] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61780 ] 00:07:26.520 [2024-11-15 10:51:13.146302] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.520 [2024-11-15 10:51:13.191826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.520 [2024-11-15 10:51:13.191899] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:07:26.520 [2024-11-15 10:51:13.191916] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:26.520 [2024-11-15 10:51:13.191935] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:26.520 [2024-11-15 10:51:13.191979] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:26.520 10:51:13 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@655 -- # es=234 00:07:26.520 10:51:13 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:26.520 10:51:13 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@664 -- # es=106 00:07:26.520 10:51:13 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@665 -- # case "$es" in 00:07:26.520 10:51:13 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@672 -- # es=1 00:07:26.520 10:51:13 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:26.520 00:07:26.520 real 0m0.325s 00:07:26.520 user 0m0.163s 00:07:26.520 sys 0m0.060s 00:07:26.520 ************************************ 00:07:26.520 END TEST dd_invalid_json 00:07:26.520 10:51:13 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:26.520 10:51:13 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:07:26.520 ************************************ 00:07:26.520 10:51:13 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@228 -- # run_test dd_invalid_seek invalid_seek 00:07:26.520 10:51:13 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:26.520 10:51:13 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:26.520 10:51:13 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:26.520 ************************************ 00:07:26.520 START TEST dd_invalid_seek 00:07:26.520 ************************************ 00:07:26.520 10:51:13 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1129 -- # invalid_seek 00:07:26.520 10:51:13 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@102 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:07:26.520 10:51:13 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:07:26.520 10:51:13 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # local -A method_bdev_malloc_create_0 00:07:26.520 10:51:13 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@108 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:07:26.520 10:51:13 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:07:26.520 
10:51:13 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # local -A method_bdev_malloc_create_1 00:07:26.520 10:51:13 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:07:26.520 10:51:13 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@652 -- # local es=0 00:07:26.520 10:51:13 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:07:26.520 10:51:13 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:26.520 10:51:13 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # gen_conf 00:07:26.520 10:51:13 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/common.sh@31 -- # xtrace_disable 00:07:26.520 10:51:13 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:07:26.520 10:51:13 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:26.520 10:51:13 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:26.520 10:51:13 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:26.520 10:51:13 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:26.520 10:51:13 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:26.520 10:51:13 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:26.520 10:51:13 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:26.520 10:51:13 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:07:26.520 [2024-11-15 10:51:13.370169] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
00:07:26.520 [2024-11-15 10:51:13.370816] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61804 ] 00:07:26.521 { 00:07:26.521 "subsystems": [ 00:07:26.521 { 00:07:26.521 "subsystem": "bdev", 00:07:26.521 "config": [ 00:07:26.521 { 00:07:26.521 "params": { 00:07:26.521 "block_size": 512, 00:07:26.521 "num_blocks": 512, 00:07:26.521 "name": "malloc0" 00:07:26.521 }, 00:07:26.521 "method": "bdev_malloc_create" 00:07:26.521 }, 00:07:26.521 { 00:07:26.521 "params": { 00:07:26.521 "block_size": 512, 00:07:26.521 "num_blocks": 512, 00:07:26.521 "name": "malloc1" 00:07:26.521 }, 00:07:26.521 "method": "bdev_malloc_create" 00:07:26.521 }, 00:07:26.521 { 00:07:26.521 "method": "bdev_wait_for_examine" 00:07:26.521 } 00:07:26.521 ] 00:07:26.521 } 00:07:26.521 ] 00:07:26.521 } 00:07:26.779 [2024-11-15 10:51:13.514952] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.779 [2024-11-15 10:51:13.563310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.779 [2024-11-15 10:51:13.631346] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:27.038 [2024-11-15 10:51:13.698980] spdk_dd.c:1145:dd_run: *ERROR*: --seek value too big (513) - only 512 blocks available in output 00:07:27.038 [2024-11-15 10:51:13.699044] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:27.039 [2024-11-15 10:51:13.850200] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:27.297 10:51:13 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@655 -- # es=228 00:07:27.297 10:51:13 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:27.298 10:51:13 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@664 -- # es=100 00:07:27.298 10:51:13 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@665 -- # case "$es" in 00:07:27.298 10:51:13 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@672 -- # es=1 00:07:27.298 10:51:13 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:27.298 00:07:27.298 real 0m0.610s 00:07:27.298 user 0m0.386s 00:07:27.298 sys 0m0.187s 00:07:27.298 ************************************ 00:07:27.298 END TEST dd_invalid_seek 00:07:27.298 ************************************ 00:07:27.298 10:51:13 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:27.298 10:51:13 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:07:27.298 10:51:13 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@229 -- # run_test dd_invalid_skip invalid_skip 00:07:27.298 10:51:13 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:27.298 10:51:13 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:27.298 10:51:13 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:27.298 ************************************ 00:07:27.298 START TEST dd_invalid_skip 00:07:27.298 ************************************ 00:07:27.298 10:51:13 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1129 -- # invalid_skip 00:07:27.298 10:51:13 spdk_dd.spdk_dd_negative.dd_invalid_skip -- 
dd/negative_dd.sh@125 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:07:27.298 10:51:13 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:07:27.298 10:51:13 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # local -A method_bdev_malloc_create_0 00:07:27.298 10:51:13 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@131 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:07:27.298 10:51:13 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:07:27.298 10:51:13 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # local -A method_bdev_malloc_create_1 00:07:27.298 10:51:13 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:07:27.298 10:51:13 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@652 -- # local es=0 00:07:27.298 10:51:13 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:07:27.298 10:51:13 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # gen_conf 00:07:27.298 10:51:13 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:27.298 10:51:13 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/common.sh@31 -- # xtrace_disable 00:07:27.298 10:51:13 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:07:27.298 10:51:13 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:27.298 10:51:13 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:27.298 10:51:13 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:27.298 10:51:13 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:27.298 10:51:13 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:27.298 10:51:13 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:27.298 10:51:13 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:27.298 10:51:13 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:07:27.298 { 00:07:27.298 "subsystems": [ 00:07:27.298 { 00:07:27.298 "subsystem": "bdev", 00:07:27.298 "config": [ 00:07:27.298 { 00:07:27.298 "params": { 00:07:27.298 "block_size": 512, 00:07:27.298 "num_blocks": 512, 00:07:27.298 "name": "malloc0" 00:07:27.298 }, 00:07:27.298 "method": "bdev_malloc_create" 00:07:27.298 }, 00:07:27.298 { 00:07:27.298 "params": { 00:07:27.298 "block_size": 512, 00:07:27.298 "num_blocks": 512, 00:07:27.298 "name": "malloc1" 
00:07:27.298 }, 00:07:27.298 "method": "bdev_malloc_create" 00:07:27.298 }, 00:07:27.298 { 00:07:27.298 "method": "bdev_wait_for_examine" 00:07:27.298 } 00:07:27.298 ] 00:07:27.298 } 00:07:27.298 ] 00:07:27.298 } 00:07:27.298 [2024-11-15 10:51:14.044723] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:07:27.298 [2024-11-15 10:51:14.044851] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61843 ] 00:07:27.557 [2024-11-15 10:51:14.190092] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.557 [2024-11-15 10:51:14.235658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.557 [2024-11-15 10:51:14.303993] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:27.557 [2024-11-15 10:51:14.372043] spdk_dd.c:1102:dd_run: *ERROR*: --skip value too big (513) - only 512 blocks available in input 00:07:27.557 [2024-11-15 10:51:14.372103] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:27.816 [2024-11-15 10:51:14.527744] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:27.816 10:51:14 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@655 -- # es=228 00:07:27.816 10:51:14 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:27.816 10:51:14 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@664 -- # es=100 00:07:27.816 10:51:14 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@665 -- # case "$es" in 00:07:27.816 10:51:14 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@672 -- # es=1 00:07:27.816 10:51:14 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:27.816 00:07:27.816 real 0m0.634s 00:07:27.816 user 0m0.416s 00:07:27.816 sys 0m0.178s 00:07:27.816 ************************************ 00:07:27.816 END TEST dd_invalid_skip 00:07:27.816 ************************************ 00:07:27.816 10:51:14 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:27.816 10:51:14 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:07:27.816 10:51:14 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@230 -- # run_test dd_invalid_input_count invalid_input_count 00:07:27.816 10:51:14 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:27.816 10:51:14 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:27.816 10:51:14 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:27.816 ************************************ 00:07:27.816 START TEST dd_invalid_input_count 00:07:27.816 ************************************ 00:07:27.816 10:51:14 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1129 -- # invalid_input_count 00:07:27.816 10:51:14 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@149 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:07:27.816 10:51:14 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:07:27.816 10:51:14 
spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # local -A method_bdev_malloc_create_0 00:07:27.816 10:51:14 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@155 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:07:27.816 10:51:14 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:07:27.816 10:51:14 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # local -A method_bdev_malloc_create_1 00:07:27.816 10:51:14 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:07:27.816 10:51:14 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@652 -- # local es=0 00:07:27.816 10:51:14 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:07:27.816 10:51:14 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # gen_conf 00:07:27.816 10:51:14 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:27.816 10:51:14 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/common.sh@31 -- # xtrace_disable 00:07:27.816 10:51:14 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:07:27.816 10:51:14 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:27.816 10:51:14 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:28.074 10:51:14 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:28.074 10:51:14 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:28.074 10:51:14 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:28.074 10:51:14 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:28.074 10:51:14 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:28.074 10:51:14 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:07:28.074 { 00:07:28.074 "subsystems": [ 00:07:28.074 { 00:07:28.074 "subsystem": "bdev", 00:07:28.074 "config": [ 00:07:28.074 { 00:07:28.074 "params": { 00:07:28.074 "block_size": 512, 00:07:28.074 "num_blocks": 512, 00:07:28.074 "name": "malloc0" 00:07:28.074 }, 00:07:28.074 "method": "bdev_malloc_create" 00:07:28.074 }, 00:07:28.074 { 00:07:28.074 "params": { 00:07:28.074 "block_size": 512, 00:07:28.074 "num_blocks": 512, 00:07:28.074 "name": "malloc1" 00:07:28.074 }, 00:07:28.074 "method": "bdev_malloc_create" 00:07:28.074 }, 00:07:28.074 { 00:07:28.074 "method": "bdev_wait_for_examine" 00:07:28.074 } 
00:07:28.074 ] 00:07:28.074 } 00:07:28.074 ] 00:07:28.074 } 00:07:28.074 [2024-11-15 10:51:14.733105] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:07:28.074 [2024-11-15 10:51:14.733352] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61877 ] 00:07:28.074 [2024-11-15 10:51:14.878241] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.074 [2024-11-15 10:51:14.926415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.333 [2024-11-15 10:51:14.995149] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:28.333 [2024-11-15 10:51:15.062932] spdk_dd.c:1110:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available from input 00:07:28.333 [2024-11-15 10:51:15.062997] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:28.592 [2024-11-15 10:51:15.216659] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:28.592 ************************************ 00:07:28.592 END TEST dd_invalid_input_count 00:07:28.592 ************************************ 00:07:28.592 10:51:15 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@655 -- # es=228 00:07:28.592 10:51:15 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:28.592 10:51:15 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@664 -- # es=100 00:07:28.592 10:51:15 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@665 -- # case "$es" in 00:07:28.592 10:51:15 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@672 -- # es=1 00:07:28.592 10:51:15 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:28.592 00:07:28.592 real 0m0.633s 00:07:28.592 user 0m0.408s 00:07:28.592 sys 0m0.183s 00:07:28.592 10:51:15 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:28.592 10:51:15 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:07:28.592 10:51:15 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@231 -- # run_test dd_invalid_output_count invalid_output_count 00:07:28.592 10:51:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:28.592 10:51:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:28.592 10:51:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:28.592 ************************************ 00:07:28.592 START TEST dd_invalid_output_count 00:07:28.592 ************************************ 00:07:28.592 10:51:15 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1129 -- # invalid_output_count 00:07:28.592 10:51:15 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@173 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:07:28.592 10:51:15 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:07:28.592 10:51:15 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # local -A 
method_bdev_malloc_create_0 00:07:28.592 10:51:15 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:07:28.592 10:51:15 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@652 -- # local es=0 00:07:28.592 10:51:15 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # gen_conf 00:07:28.592 10:51:15 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:07:28.592 10:51:15 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:28.592 10:51:15 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/common.sh@31 -- # xtrace_disable 00:07:28.592 10:51:15 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:07:28.592 10:51:15 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:28.592 10:51:15 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:28.592 10:51:15 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:28.592 10:51:15 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:28.592 10:51:15 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:28.592 10:51:15 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:28.592 10:51:15 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:28.592 10:51:15 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:07:28.592 { 00:07:28.592 "subsystems": [ 00:07:28.592 { 00:07:28.592 "subsystem": "bdev", 00:07:28.592 "config": [ 00:07:28.592 { 00:07:28.592 "params": { 00:07:28.592 "block_size": 512, 00:07:28.592 "num_blocks": 512, 00:07:28.592 "name": "malloc0" 00:07:28.592 }, 00:07:28.592 "method": "bdev_malloc_create" 00:07:28.592 }, 00:07:28.592 { 00:07:28.592 "method": "bdev_wait_for_examine" 00:07:28.592 } 00:07:28.592 ] 00:07:28.592 } 00:07:28.592 ] 00:07:28.592 } 00:07:28.592 [2024-11-15 10:51:15.423162] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
00:07:28.592 [2024-11-15 10:51:15.423418] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61910 ] 00:07:28.852 [2024-11-15 10:51:15.566950] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.852 [2024-11-15 10:51:15.611370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.852 [2024-11-15 10:51:15.680011] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:29.110 [2024-11-15 10:51:15.739785] spdk_dd.c:1152:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available in output 00:07:29.110 [2024-11-15 10:51:15.740105] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:29.110 [2024-11-15 10:51:15.891365] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:29.370 ************************************ 00:07:29.370 END TEST dd_invalid_output_count 00:07:29.370 ************************************ 00:07:29.370 10:51:15 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@655 -- # es=228 00:07:29.370 10:51:15 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:29.370 10:51:15 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@664 -- # es=100 00:07:29.370 10:51:15 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@665 -- # case "$es" in 00:07:29.370 10:51:15 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@672 -- # es=1 00:07:29.370 10:51:15 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:29.370 00:07:29.370 real 0m0.619s 00:07:29.370 user 0m0.396s 00:07:29.370 sys 0m0.178s 00:07:29.370 10:51:15 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:29.370 10:51:15 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:07:29.370 10:51:16 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@232 -- # run_test dd_bs_not_multiple bs_not_multiple 00:07:29.370 10:51:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:29.370 10:51:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:29.370 10:51:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:29.370 ************************************ 00:07:29.370 START TEST dd_bs_not_multiple 00:07:29.370 ************************************ 00:07:29.370 10:51:16 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1129 -- # bs_not_multiple 00:07:29.370 10:51:16 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@190 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:07:29.370 10:51:16 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:07:29.370 10:51:16 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # local -A method_bdev_malloc_create_0 00:07:29.370 10:51:16 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@196 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:07:29.370 10:51:16 
spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:07:29.370 10:51:16 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # local -A method_bdev_malloc_create_1 00:07:29.370 10:51:16 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:07:29.370 10:51:16 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@652 -- # local es=0 00:07:29.370 10:51:16 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # gen_conf 00:07:29.370 10:51:16 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:07:29.370 10:51:16 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:29.370 10:51:16 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/common.sh@31 -- # xtrace_disable 00:07:29.370 10:51:16 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:07:29.370 10:51:16 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:29.370 10:51:16 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:29.370 10:51:16 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:29.370 10:51:16 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:29.370 10:51:16 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:29.370 10:51:16 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:29.370 10:51:16 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:29.370 10:51:16 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:07:29.370 [2024-11-15 10:51:16.094117] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
00:07:29.370 [2024-11-15 10:51:16.094211] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61947 ] 00:07:29.370 { 00:07:29.370 "subsystems": [ 00:07:29.370 { 00:07:29.370 "subsystem": "bdev", 00:07:29.370 "config": [ 00:07:29.370 { 00:07:29.370 "params": { 00:07:29.370 "block_size": 512, 00:07:29.370 "num_blocks": 512, 00:07:29.370 "name": "malloc0" 00:07:29.370 }, 00:07:29.370 "method": "bdev_malloc_create" 00:07:29.370 }, 00:07:29.370 { 00:07:29.370 "params": { 00:07:29.370 "block_size": 512, 00:07:29.370 "num_blocks": 512, 00:07:29.370 "name": "malloc1" 00:07:29.370 }, 00:07:29.370 "method": "bdev_malloc_create" 00:07:29.370 }, 00:07:29.370 { 00:07:29.370 "method": "bdev_wait_for_examine" 00:07:29.370 } 00:07:29.370 ] 00:07:29.370 } 00:07:29.370 ] 00:07:29.370 } 00:07:29.629 [2024-11-15 10:51:16.230805] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.629 [2024-11-15 10:51:16.280132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.629 [2024-11-15 10:51:16.348752] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:29.629 [2024-11-15 10:51:16.416472] spdk_dd.c:1168:dd_run: *ERROR*: --bs value must be a multiple of input native block size (512) 00:07:29.629 [2024-11-15 10:51:16.416547] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:29.888 [2024-11-15 10:51:16.568746] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:29.888 10:51:16 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@655 -- # es=234 00:07:29.888 10:51:16 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:29.888 10:51:16 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@664 -- # es=106 00:07:29.888 10:51:16 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@665 -- # case "$es" in 00:07:29.888 10:51:16 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@672 -- # es=1 00:07:29.888 10:51:16 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:29.888 00:07:29.888 real 0m0.615s 00:07:29.888 user 0m0.400s 00:07:29.888 sys 0m0.174s 00:07:29.888 ************************************ 00:07:29.888 END TEST dd_bs_not_multiple 00:07:29.888 ************************************ 00:07:29.888 10:51:16 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:29.888 10:51:16 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:07:29.888 ************************************ 00:07:29.888 END TEST spdk_dd_negative 00:07:29.888 ************************************ 00:07:29.888 00:07:29.888 real 0m7.100s 00:07:29.888 user 0m3.739s 00:07:29.888 sys 0m2.761s 00:07:29.888 10:51:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:29.888 10:51:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:30.147 ************************************ 00:07:30.147 END TEST spdk_dd 00:07:30.147 ************************************ 00:07:30.147 00:07:30.147 real 1m22.196s 00:07:30.147 user 0m52.226s 00:07:30.147 sys 0m37.942s 00:07:30.147 10:51:16 spdk_dd -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:07:30.147 10:51:16 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:30.147 10:51:16 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:07:30.147 10:51:16 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:30.147 10:51:16 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:30.147 10:51:16 -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:30.147 10:51:16 -- common/autotest_common.sh@10 -- # set +x 00:07:30.147 10:51:16 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:30.147 10:51:16 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:07:30.147 10:51:16 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:07:30.147 10:51:16 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:07:30.147 10:51:16 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:07:30.147 10:51:16 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:07:30.147 10:51:16 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:30.147 10:51:16 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:30.147 10:51:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:30.147 10:51:16 -- common/autotest_common.sh@10 -- # set +x 00:07:30.147 ************************************ 00:07:30.147 START TEST nvmf_tcp 00:07:30.147 ************************************ 00:07:30.147 10:51:16 nvmf_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:30.147 * Looking for test storage... 00:07:30.147 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:07:30.147 10:51:16 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:30.147 10:51:16 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:07:30.147 10:51:16 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:30.406 10:51:17 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:30.406 10:51:17 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:30.406 10:51:17 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:30.406 10:51:17 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:30.406 10:51:17 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:07:30.406 10:51:17 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:07:30.406 10:51:17 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:07:30.406 10:51:17 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:07:30.406 10:51:17 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:07:30.406 10:51:17 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:07:30.406 10:51:17 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:07:30.406 10:51:17 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:30.406 10:51:17 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:07:30.406 10:51:17 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:07:30.406 10:51:17 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:30.406 10:51:17 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:30.406 10:51:17 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:07:30.406 10:51:17 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:07:30.406 10:51:17 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:30.406 10:51:17 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:07:30.406 10:51:17 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:07:30.406 10:51:17 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:07:30.406 10:51:17 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:07:30.406 10:51:17 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:30.406 10:51:17 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:07:30.406 10:51:17 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:07:30.406 10:51:17 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:30.406 10:51:17 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:30.406 10:51:17 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:07:30.406 10:51:17 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:30.406 10:51:17 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:30.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.406 --rc genhtml_branch_coverage=1 00:07:30.406 --rc genhtml_function_coverage=1 00:07:30.406 --rc genhtml_legend=1 00:07:30.406 --rc geninfo_all_blocks=1 00:07:30.406 --rc geninfo_unexecuted_blocks=1 00:07:30.406 00:07:30.406 ' 00:07:30.407 10:51:17 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:30.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.407 --rc genhtml_branch_coverage=1 00:07:30.407 --rc genhtml_function_coverage=1 00:07:30.407 --rc genhtml_legend=1 00:07:30.407 --rc geninfo_all_blocks=1 00:07:30.407 --rc geninfo_unexecuted_blocks=1 00:07:30.407 00:07:30.407 ' 00:07:30.407 10:51:17 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:30.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.407 --rc genhtml_branch_coverage=1 00:07:30.407 --rc genhtml_function_coverage=1 00:07:30.407 --rc genhtml_legend=1 00:07:30.407 --rc geninfo_all_blocks=1 00:07:30.407 --rc geninfo_unexecuted_blocks=1 00:07:30.407 00:07:30.407 ' 00:07:30.407 10:51:17 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:30.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.407 --rc genhtml_branch_coverage=1 00:07:30.407 --rc genhtml_function_coverage=1 00:07:30.407 --rc genhtml_legend=1 00:07:30.407 --rc geninfo_all_blocks=1 00:07:30.407 --rc geninfo_unexecuted_blocks=1 00:07:30.407 00:07:30.407 ' 00:07:30.407 10:51:17 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:30.407 10:51:17 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:30.407 10:51:17 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:30.407 10:51:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:30.407 10:51:17 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:30.407 10:51:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:30.407 ************************************ 00:07:30.407 START TEST nvmf_target_core 00:07:30.407 ************************************ 00:07:30.407 10:51:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:30.407 * Looking for test storage... 00:07:30.407 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:07:30.407 10:51:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:30.407 10:51:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:07:30.407 10:51:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:30.407 10:51:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:30.407 10:51:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:30.407 10:51:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:30.407 10:51:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:30.407 10:51:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:07:30.407 10:51:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:07:30.407 10:51:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:07:30.407 10:51:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:07:30.407 10:51:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:07:30.407 10:51:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:07:30.407 10:51:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:07:30.407 10:51:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:30.407 10:51:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:07:30.407 10:51:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:07:30.407 10:51:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:30.407 10:51:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:30.407 10:51:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:07:30.407 10:51:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:07:30.407 10:51:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:30.407 10:51:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:07:30.407 10:51:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:07:30.407 10:51:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:07:30.407 10:51:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:07:30.407 10:51:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:30.407 10:51:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:07:30.407 10:51:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:07:30.407 10:51:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:30.407 10:51:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:30.407 10:51:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:07:30.407 10:51:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:30.407 10:51:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:30.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.407 --rc genhtml_branch_coverage=1 00:07:30.407 --rc genhtml_function_coverage=1 00:07:30.407 --rc genhtml_legend=1 00:07:30.407 --rc geninfo_all_blocks=1 00:07:30.407 --rc geninfo_unexecuted_blocks=1 00:07:30.407 00:07:30.407 ' 00:07:30.407 10:51:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:30.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.407 --rc genhtml_branch_coverage=1 00:07:30.407 --rc genhtml_function_coverage=1 00:07:30.407 --rc genhtml_legend=1 00:07:30.407 --rc geninfo_all_blocks=1 00:07:30.407 --rc geninfo_unexecuted_blocks=1 00:07:30.407 00:07:30.407 ' 00:07:30.407 10:51:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:30.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.407 --rc genhtml_branch_coverage=1 00:07:30.407 --rc genhtml_function_coverage=1 00:07:30.407 --rc genhtml_legend=1 00:07:30.407 --rc geninfo_all_blocks=1 00:07:30.407 --rc geninfo_unexecuted_blocks=1 00:07:30.407 00:07:30.407 ' 00:07:30.407 10:51:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:30.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.407 --rc genhtml_branch_coverage=1 00:07:30.407 --rc genhtml_function_coverage=1 00:07:30.407 --rc genhtml_legend=1 00:07:30.407 --rc geninfo_all_blocks=1 00:07:30.407 --rc geninfo_unexecuted_blocks=1 00:07:30.407 00:07:30.407 ' 00:07:30.407 10:51:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:07:30.407 10:51:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:30.407 10:51:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:30.407 10:51:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:07:30.407 10:51:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:30.407 10:51:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:30.407 10:51:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:30.407 10:51:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:30.407 10:51:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:30.407 10:51:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:30.407 10:51:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:30.407 10:51:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:30.407 10:51:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:30.407 10:51:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:30.407 10:51:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:07:30.407 10:51:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:07:30.407 10:51:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:30.407 10:51:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:30.407 10:51:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:30.407 10:51:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:30.407 10:51:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:30.407 10:51:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:07:30.668 10:51:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:30.668 10:51:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:30.668 10:51:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:30.668 10:51:17 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.668 10:51:17 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:07:30.668 10:51:17 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.668 10:51:17 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:07:30.668 10:51:17 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.668 10:51:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:07:30.668 10:51:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:30.668 10:51:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:30.668 10:51:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:30.668 10:51:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:30.668 10:51:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:30.668 10:51:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:30.668 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:30.668 10:51:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:30.668 10:51:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:30.668 10:51:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:30.668 10:51:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:30.668 10:51:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:07:30.668 10:51:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 1 -eq 0 ]] 00:07:30.668 10:51:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:30.668 10:51:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:30.668 10:51:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:30.668 10:51:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:30.668 ************************************ 00:07:30.668 START TEST nvmf_host_management 00:07:30.668 ************************************ 00:07:30.668 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:30.668 * Looking for test storage... 
00:07:30.668 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:30.668 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:30.668 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:07:30.668 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:30.668 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:30.668 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:30.668 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:30.668 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:30.668 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:07:30.668 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:07:30.668 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:07:30.668 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:07:30.668 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:07:30.668 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:07:30.668 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:07:30.668 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:30.668 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:07:30.668 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:07:30.668 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:30.668 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:30.668 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:07:30.668 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:07:30.668 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:30.668 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:07:30.668 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:07:30.668 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:07:30.668 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:07:30.668 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:30.668 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:07:30.669 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:07:30.669 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:30.669 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:30.669 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:07:30.669 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:30.669 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:30.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.669 --rc genhtml_branch_coverage=1 00:07:30.669 --rc genhtml_function_coverage=1 00:07:30.669 --rc genhtml_legend=1 00:07:30.669 --rc geninfo_all_blocks=1 00:07:30.669 --rc geninfo_unexecuted_blocks=1 00:07:30.669 00:07:30.669 ' 00:07:30.669 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:30.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.669 --rc genhtml_branch_coverage=1 00:07:30.669 --rc genhtml_function_coverage=1 00:07:30.669 --rc genhtml_legend=1 00:07:30.669 --rc geninfo_all_blocks=1 00:07:30.669 --rc geninfo_unexecuted_blocks=1 00:07:30.669 00:07:30.669 ' 00:07:30.669 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:30.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.669 --rc genhtml_branch_coverage=1 00:07:30.669 --rc genhtml_function_coverage=1 00:07:30.669 --rc genhtml_legend=1 00:07:30.669 --rc geninfo_all_blocks=1 00:07:30.669 --rc geninfo_unexecuted_blocks=1 00:07:30.669 00:07:30.669 ' 00:07:30.669 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:30.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.669 --rc genhtml_branch_coverage=1 00:07:30.669 --rc genhtml_function_coverage=1 00:07:30.669 --rc genhtml_legend=1 00:07:30.669 --rc geninfo_all_blocks=1 00:07:30.669 --rc geninfo_unexecuted_blocks=1 00:07:30.669 00:07:30.669 ' 00:07:30.669 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
00:07:30.669 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:30.669 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:30.669 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:30.669 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:30.669 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:30.669 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:30.669 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:30.669 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:30.669 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:30.669 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:30.669 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:30.669 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:07:30.669 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:07:30.669 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:30.669 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:30.669 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:30.669 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:30.669 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:30.669 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:07:30.669 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:30.669 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:30.669 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:30.669 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.669 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.669 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.669 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:30.669 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.669 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:07:30.669 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:30.669 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:30.669 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:30.669 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:30.669 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:30.669 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:30.669 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:30.669 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:30.669 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:30.669 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:30.669 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:30.669 10:51:17 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:30.669 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:30.669 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:30.669 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:30.669 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:30.669 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:30.669 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:30.669 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:30.669 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:30.669 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:30.669 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:07:30.669 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:07:30.669 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:07:30.669 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:07:30.669 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:07:30.670 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@460 -- # nvmf_veth_init 00:07:30.670 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:30.670 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:07:30.670 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:07:30.670 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:07:30.670 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:30.670 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:07:30.670 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:30.670 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:07:30.670 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:30.670 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:07:30.670 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:30.670 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:30.670 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:30.670 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:30.670 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:30.670 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:30.670 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:07:30.929 Cannot find device "nvmf_init_br" 00:07:30.929 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:07:30.929 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:07:30.929 Cannot find device "nvmf_init_br2" 00:07:30.929 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:07:30.929 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:07:30.929 Cannot find device "nvmf_tgt_br" 00:07:30.929 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # true 00:07:30.929 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:07:30.929 Cannot find device "nvmf_tgt_br2" 00:07:30.929 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # true 00:07:30.929 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:07:30.929 Cannot find device "nvmf_init_br" 00:07:30.929 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # true 00:07:30.929 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:07:30.929 Cannot find device "nvmf_init_br2" 00:07:30.929 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # true 00:07:30.929 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:07:30.929 Cannot find device "nvmf_tgt_br" 00:07:30.929 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # true 00:07:30.929 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:07:30.929 Cannot find device "nvmf_tgt_br2" 00:07:30.929 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # true 00:07:30.930 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:07:30.930 Cannot find device "nvmf_br" 00:07:30.930 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # true 00:07:30.930 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:07:30.930 Cannot find device "nvmf_init_if" 00:07:30.930 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # true 00:07:30.930 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:07:30.930 Cannot find device "nvmf_init_if2" 00:07:30.930 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # true 00:07:30.930 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:30.930 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:30.930 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # true 00:07:30.930 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:30.930 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:30.930 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # true 00:07:30.930 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:07:30.930 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:30.930 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:07:30.930 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:30.930 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:30.930 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:30.930 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:30.930 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:30.930 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:07:30.930 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:07:30.930 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:07:30.930 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:07:30.930 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:07:31.189 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:07:31.189 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:07:31.189 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:07:31.189 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:07:31.189 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:31.189 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:31.189 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:31.189 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip 
link add nvmf_br type bridge 00:07:31.189 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:07:31.189 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:07:31.189 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:07:31.189 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:31.189 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:31.189 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:31.189 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:07:31.189 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:07:31.189 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:07:31.189 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:31.189 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:07:31.189 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:07:31.189 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:31.189 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.112 ms 00:07:31.189 00:07:31.189 --- 10.0.0.3 ping statistics --- 00:07:31.189 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:31.189 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:07:31.189 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:07:31.189 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:07:31.189 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.071 ms 00:07:31.189 00:07:31.189 --- 10.0.0.4 ping statistics --- 00:07:31.189 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:31.189 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:07:31.189 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:31.189 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:31.189 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:07:31.189 00:07:31.189 --- 10.0.0.1 ping statistics --- 00:07:31.189 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:31.189 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:07:31.189 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:07:31.189 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:31.189 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:07:31.189 00:07:31.189 --- 10.0.0.2 ping statistics --- 00:07:31.189 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:31.189 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:07:31.189 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:31.189 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@461 -- # return 0 00:07:31.189 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:31.189 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:31.189 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:31.189 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:31.189 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:31.189 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:31.189 10:51:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:31.189 10:51:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:31.189 10:51:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:31.189 10:51:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:31.189 10:51:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:31.189 10:51:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:31.189 10:51:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:31.189 10:51:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=62286 00:07:31.189 10:51:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 62286 00:07:31.189 10:51:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 62286 ']' 00:07:31.189 10:51:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:31.189 10:51:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:31.189 10:51:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:31.189 10:51:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:31.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:31.189 10:51:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:31.189 10:51:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:31.448 [2024-11-15 10:51:18.088159] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
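The nvmf_veth_init sequence captured above builds the test network for this NVMe/TCP run: veth pairs for the initiator side, veth pairs whose far ends are moved into the nvmf_tgt_ns_spdk namespace for the target side, a bridge joining the peer ends, iptables rules opening port 4420, and ping checks in both directions. A condensed sketch using only the commands visible in the log (interface names and addresses are the ones from this run; the for-loops merely compact the per-interface calls):

# Condensed sketch of the veth/bridge topology set up by nvmf_veth_init.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
# Target-side interfaces live inside the namespace.
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
# Initiator addresses 10.0.0.1/.2, target addresses 10.0.0.3/.4.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
# Bring everything up and bridge the peer ends together.
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
  ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
  ip link set "$dev" master nvmf_br
done
# Open the NVMe/TCP port and verify reachability across the bridge.
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1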
00:07:31.448 [2024-11-15 10:51:18.088253] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:31.448 [2024-11-15 10:51:18.244745] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:31.706 [2024-11-15 10:51:18.318342] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:31.706 [2024-11-15 10:51:18.318410] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:31.706 [2024-11-15 10:51:18.318425] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:31.706 [2024-11-15 10:51:18.318435] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:31.706 [2024-11-15 10:51:18.318444] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:31.706 [2024-11-15 10:51:18.319720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:31.706 [2024-11-15 10:51:18.319857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:31.706 [2024-11-15 10:51:18.320001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:31.706 [2024-11-15 10:51:18.320007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:31.706 [2024-11-15 10:51:18.378766] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:31.706 10:51:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:31.706 10:51:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:31.706 10:51:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:31.706 10:51:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:31.706 10:51:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:31.706 10:51:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:31.706 10:51:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:31.706 10:51:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.706 10:51:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:31.706 [2024-11-15 10:51:18.500796] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:31.706 10:51:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.706 10:51:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:31.706 10:51:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:31.706 10:51:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:31.706 10:51:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 
00:07:31.706 10:51:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:31.706 10:51:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:31.706 10:51:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.706 10:51:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:31.965 Malloc0 00:07:31.965 [2024-11-15 10:51:18.591495] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:31.965 10:51:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.965 10:51:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:31.965 10:51:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:31.965 10:51:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:31.965 10:51:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=62334 00:07:31.965 10:51:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 62334 /var/tmp/bdevperf.sock 00:07:31.965 10:51:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 62334 ']' 00:07:31.965 10:51:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:31.965 10:51:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:31.965 10:51:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:31.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:07:31.965 10:51:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:31.965 10:51:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:31.965 10:51:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:31.965 10:51:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:31.965 10:51:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:31.965 10:51:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:31.965 10:51:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:31.965 10:51:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:31.965 { 00:07:31.965 "params": { 00:07:31.965 "name": "Nvme$subsystem", 00:07:31.965 "trtype": "$TEST_TRANSPORT", 00:07:31.965 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:31.965 "adrfam": "ipv4", 00:07:31.965 "trsvcid": "$NVMF_PORT", 00:07:31.965 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:31.965 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:31.965 "hdgst": ${hdgst:-false}, 00:07:31.965 "ddgst": ${ddgst:-false} 00:07:31.965 }, 00:07:31.965 "method": "bdev_nvme_attach_controller" 00:07:31.965 } 00:07:31.965 EOF 00:07:31.965 )") 00:07:31.965 10:51:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:31.965 10:51:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:31.965 10:51:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:31.965 10:51:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:31.965 "params": { 00:07:31.965 "name": "Nvme0", 00:07:31.965 "trtype": "tcp", 00:07:31.965 "traddr": "10.0.0.3", 00:07:31.965 "adrfam": "ipv4", 00:07:31.965 "trsvcid": "4420", 00:07:31.965 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:31.965 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:31.965 "hdgst": false, 00:07:31.965 "ddgst": false 00:07:31.965 }, 00:07:31.965 "method": "bdev_nvme_attach_controller" 00:07:31.965 }' 00:07:31.965 [2024-11-15 10:51:18.705414] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:07:31.965 [2024-11-15 10:51:18.705548] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62334 ] 00:07:32.224 [2024-11-15 10:51:18.858652] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.224 [2024-11-15 10:51:18.943655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.224 [2024-11-15 10:51:19.028907] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:32.482 Running I/O for 10 seconds... 
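The bdevperf run above receives its NVMe-oF connection parameters as JSON on /dev/fd/63; only the bdev_nvme_attach_controller entry of that generated config is echoed in this excerpt. A sketch of an equivalent invocation with the config written to a file, assuming the same binary, socket, and connection parameters, and assuming the generated file carries the same subsystems/bdev wrapper as the spdk_dd config earlier in this log (the /tmp path is purely illustrative):

# Hedged re-run of the bdevperf step; the attach-controller parameters are
# copied from the log, the surrounding wrapper and file path are assumptions.
cat > /tmp/nvme0_config.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.3",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
  -r /var/tmp/bdevperf.sock --json /tmp/nvme0_config.json \
  -q 64 -o 65536 -w verify -t 10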
00:07:33.050 10:51:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:33.050 10:51:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:33.050 10:51:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:33.050 10:51:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.050 10:51:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:33.050 10:51:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.050 10:51:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:33.050 10:51:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:33.050 10:51:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:33.050 10:51:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:33.050 10:51:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:33.050 10:51:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:33.050 10:51:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:33.050 10:51:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:33.050 10:51:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:33.050 10:51:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:33.050 10:51:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.050 10:51:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:33.050 10:51:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.050 10:51:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=1027 00:07:33.050 10:51:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 1027 -ge 100 ']' 00:07:33.050 10:51:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:33.050 10:51:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:33.050 10:51:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:33.050 10:51:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:33.050 10:51:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.050 10:51:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:33.050 [2024-11-15 
10:51:19.833248] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:33.050 [2024-11-15 10:51:19.833333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.050 [2024-11-15 10:51:19.833354] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:07:33.050 [2024-11-15 10:51:19.833362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.050 [2024-11-15 10:51:19.833371] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:07:33.050 [2024-11-15 10:51:19.833380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.050 [2024-11-15 10:51:19.833389] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:07:33.050 [2024-11-15 10:51:19.833397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.050 [2024-11-15 10:51:19.833405] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac4ce0 is same with the state(6) to be set 00:07:33.050 10:51:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.050 10:51:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:33.050 10:51:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.050 10:51:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:33.050 10:51:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.050 10:51:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:33.050 [2024-11-15 10:51:19.849701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.050 [2024-11-15 10:51:19.849729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.050 [2024-11-15 10:51:19.849751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.050 [2024-11-15 10:51:19.849767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.050 [2024-11-15 10:51:19.849777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.050 [2024-11-15 10:51:19.849785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.050 [2024-11-15 10:51:19.849796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:07:33.050 [2024-11-15 10:51:19.849804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.050 [2024-11-15 10:51:19.849814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.050 [2024-11-15 10:51:19.849822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.050 [2024-11-15 10:51:19.849832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.050 [2024-11-15 10:51:19.849841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.050 [2024-11-15 10:51:19.849860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.050 [2024-11-15 10:51:19.849869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.050 [2024-11-15 10:51:19.849879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.050 [2024-11-15 10:51:19.849887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.050 [2024-11-15 10:51:19.849896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.050 [2024-11-15 10:51:19.849905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.050 [2024-11-15 10:51:19.849914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.050 [2024-11-15 10:51:19.849922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.050 [2024-11-15 10:51:19.849931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.050 [2024-11-15 10:51:19.849939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.050 [2024-11-15 10:51:19.849948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.050 [2024-11-15 10:51:19.849955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.050 [2024-11-15 10:51:19.849965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.050 [2024-11-15 10:51:19.849972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.050 [2024-11-15 10:51:19.849981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.051 
[2024-11-15 10:51:19.849989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.051 [2024-11-15 10:51:19.849998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.051 [2024-11-15 10:51:19.850015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.051 [2024-11-15 10:51:19.850025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.051 [2024-11-15 10:51:19.850033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.051 [2024-11-15 10:51:19.850042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.051 [2024-11-15 10:51:19.850050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.051 [2024-11-15 10:51:19.850060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.051 [2024-11-15 10:51:19.850087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.051 [2024-11-15 10:51:19.850096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.051 [2024-11-15 10:51:19.850104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.051 [2024-11-15 10:51:19.850114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.051 [2024-11-15 10:51:19.850122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.051 [2024-11-15 10:51:19.850131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.051 [2024-11-15 10:51:19.850139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.051 [2024-11-15 10:51:19.850148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.051 [2024-11-15 10:51:19.850156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.051 [2024-11-15 10:51:19.850165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.051 [2024-11-15 10:51:19.850173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.051 [2024-11-15 10:51:19.850182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.051 [2024-11-15 
10:51:19.850190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.051 [2024-11-15 10:51:19.850199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.051 [2024-11-15 10:51:19.850207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.051 [2024-11-15 10:51:19.850216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.051 [2024-11-15 10:51:19.850224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.051 [2024-11-15 10:51:19.850233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.051 [2024-11-15 10:51:19.850242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.051 [2024-11-15 10:51:19.850251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.051 [2024-11-15 10:51:19.850259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.051 [2024-11-15 10:51:19.850268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.051 [2024-11-15 10:51:19.850276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.051 [2024-11-15 10:51:19.850285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.051 [2024-11-15 10:51:19.850292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.051 [2024-11-15 10:51:19.850301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.051 [2024-11-15 10:51:19.850316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.051 [2024-11-15 10:51:19.850326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.051 [2024-11-15 10:51:19.850334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.051 [2024-11-15 10:51:19.850344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.051 [2024-11-15 10:51:19.850352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.051 [2024-11-15 10:51:19.850361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.051 [2024-11-15 
10:51:19.850375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.051 [2024-11-15 10:51:19.850384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.051 [2024-11-15 10:51:19.850392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.051 [2024-11-15 10:51:19.850401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.051 [2024-11-15 10:51:19.850408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.051 [2024-11-15 10:51:19.850418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.051 [2024-11-15 10:51:19.850425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.051 [2024-11-15 10:51:19.850435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.051 [2024-11-15 10:51:19.850443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.051 [2024-11-15 10:51:19.850452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.051 [2024-11-15 10:51:19.850460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.051 [2024-11-15 10:51:19.850469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.051 [2024-11-15 10:51:19.850477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.051 [2024-11-15 10:51:19.850487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.051 [2024-11-15 10:51:19.850495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.051 [2024-11-15 10:51:19.850504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.051 [2024-11-15 10:51:19.850511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.051 [2024-11-15 10:51:19.850520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.051 [2024-11-15 10:51:19.850538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.051 [2024-11-15 10:51:19.850548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.051 [2024-11-15 
10:51:19.850556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.051 [2024-11-15 10:51:19.850565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.051 [2024-11-15 10:51:19.850573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.051 [2024-11-15 10:51:19.850582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.051 [2024-11-15 10:51:19.850590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.051 [2024-11-15 10:51:19.850600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.051 [2024-11-15 10:51:19.850613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.051 [2024-11-15 10:51:19.850623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.051 [2024-11-15 10:51:19.850631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.052 [2024-11-15 10:51:19.850640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.052 [2024-11-15 10:51:19.850647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.052 [2024-11-15 10:51:19.850657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.052 [2024-11-15 10:51:19.850670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.052 [2024-11-15 10:51:19.850680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.052 [2024-11-15 10:51:19.850689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.052 [2024-11-15 10:51:19.850698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.052 [2024-11-15 10:51:19.850705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.052 [2024-11-15 10:51:19.850715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.052 [2024-11-15 10:51:19.850722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.052 [2024-11-15 10:51:19.850731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.052 [2024-11-15 
10:51:19.850739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.052 [2024-11-15 10:51:19.850747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.052 [2024-11-15 10:51:19.850755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.052 [2024-11-15 10:51:19.850764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.052 [2024-11-15 10:51:19.850772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.052 [2024-11-15 10:51:19.850781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.052 [2024-11-15 10:51:19.850789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.052 [2024-11-15 10:51:19.850798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.052 [2024-11-15 10:51:19.850806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.052 [2024-11-15 10:51:19.850815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.052 [2024-11-15 10:51:19.850824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.052 [2024-11-15 10:51:19.850834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.052 [2024-11-15 10:51:19.850842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.052 [2024-11-15 10:51:19.850851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.052 [2024-11-15 10:51:19.850859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.052 [2024-11-15 10:51:19.850869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.052 [2024-11-15 10:51:19.850876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.052 [2024-11-15 10:51:19.850885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.052 [2024-11-15 10:51:19.850898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.052 [2024-11-15 10:51:19.850908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:33.052 [2024-11-15 
10:51:19.850916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:33.052 [2024-11-15 10:51:19.850925] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf2d0 is same with the state(6) to be set 00:07:33.052 [2024-11-15 10:51:19.851104] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac4ce0 (9): Bad file descriptor 00:07:33.052 [2024-11-15 10:51:19.852009] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:07:33.052 task offset: 16384 on job bdev=Nvme0n1 fails 00:07:33.052 00:07:33.052 Latency(us) 00:07:33.052 [2024-11-15T10:51:19.913Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:33.052 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:33.052 Job: Nvme0n1 ended in about 0.68 seconds with error 00:07:33.052 Verification LBA range: start 0x0 length 0x400 00:07:33.052 Nvme0n1 : 0.68 1685.71 105.36 93.65 0.00 35214.10 1727.77 33363.78 00:07:33.052 [2024-11-15T10:51:19.913Z] =================================================================================================================== 00:07:33.052 [2024-11-15T10:51:19.913Z] Total : 1685.71 105.36 93.65 0.00 35214.10 1727.77 33363.78 00:07:33.052 [2024-11-15 10:51:19.853562] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:33.052 [2024-11-15 10:51:19.864414] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:07:34.429 10:51:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 62334 00:07:34.429 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (62334) - No such process 00:07:34.429 10:51:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:34.429 10:51:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:34.429 10:51:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:34.429 10:51:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:34.429 10:51:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:34.429 10:51:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:34.429 10:51:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:34.429 10:51:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:34.429 { 00:07:34.429 "params": { 00:07:34.429 "name": "Nvme$subsystem", 00:07:34.429 "trtype": "$TEST_TRANSPORT", 00:07:34.429 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:34.429 "adrfam": "ipv4", 00:07:34.429 "trsvcid": "$NVMF_PORT", 00:07:34.429 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:34.429 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:34.429 "hdgst": ${hdgst:-false}, 00:07:34.429 "ddgst": ${ddgst:-false} 00:07:34.429 }, 00:07:34.429 "method": "bdev_nvme_attach_controller" 00:07:34.429 } 
00:07:34.429 EOF 00:07:34.429 )") 00:07:34.429 10:51:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:34.429 10:51:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:34.429 10:51:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:34.429 10:51:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:34.429 "params": { 00:07:34.429 "name": "Nvme0", 00:07:34.429 "trtype": "tcp", 00:07:34.429 "traddr": "10.0.0.3", 00:07:34.429 "adrfam": "ipv4", 00:07:34.429 "trsvcid": "4420", 00:07:34.429 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:34.429 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:34.429 "hdgst": false, 00:07:34.429 "ddgst": false 00:07:34.429 }, 00:07:34.429 "method": "bdev_nvme_attach_controller" 00:07:34.429 }' 00:07:34.429 [2024-11-15 10:51:20.913702] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:07:34.429 [2024-11-15 10:51:20.913806] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62372 ] 00:07:34.429 [2024-11-15 10:51:21.059825] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.429 [2024-11-15 10:51:21.109217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.429 [2024-11-15 10:51:21.189148] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:34.688 Running I/O for 1 seconds... 00:07:35.631 1600.00 IOPS, 100.00 MiB/s 00:07:35.631 Latency(us) 00:07:35.631 [2024-11-15T10:51:22.492Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:35.631 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:35.631 Verification LBA range: start 0x0 length 0x400 00:07:35.631 Nvme0n1 : 1.03 1618.93 101.18 0.00 0.00 38831.40 4587.52 37415.10 00:07:35.631 [2024-11-15T10:51:22.492Z] =================================================================================================================== 00:07:35.631 [2024-11-15T10:51:22.492Z] Total : 1618.93 101.18 0.00 0.00 38831.40 4587.52 37415.10 00:07:35.890 10:51:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:35.890 10:51:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:35.891 10:51:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:07:35.891 10:51:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:07:35.891 10:51:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:35.891 10:51:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:35.891 10:51:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:07:35.891 10:51:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:35.891 10:51:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:07:35.891 10:51:22 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:35.891 10:51:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:35.891 rmmod nvme_tcp 00:07:35.891 rmmod nvme_fabrics 00:07:35.891 rmmod nvme_keyring 00:07:35.891 10:51:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:36.150 10:51:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:07:36.150 10:51:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:07:36.150 10:51:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 62286 ']' 00:07:36.150 10:51:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 62286 00:07:36.150 10:51:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 62286 ']' 00:07:36.150 10:51:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 62286 00:07:36.150 10:51:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:07:36.150 10:51:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:36.150 10:51:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62286 00:07:36.150 10:51:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:36.150 10:51:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:36.150 killing process with pid 62286 00:07:36.150 10:51:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62286' 00:07:36.150 10:51:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 62286 00:07:36.150 10:51:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 62286 00:07:36.150 [2024-11-15 10:51:22.978086] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:36.150 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:36.150 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:36.150 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:36.150 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:07:36.408 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:36.408 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:07:36.408 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:07:36.408 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:36.408 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:07:36.408 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:07:36.408 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:07:36.408 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:07:36.408 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:07:36.408 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:07:36.408 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:07:36.408 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:07:36.408 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:07:36.408 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:07:36.408 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:07:36.408 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:07:36.408 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:36.408 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:36.408 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns 00:07:36.409 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:36.409 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:36.409 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:36.668 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@300 -- # return 0 00:07:36.668 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:36.668 00:07:36.668 real 0m5.985s 00:07:36.668 user 0m21.436s 00:07:36.668 sys 0m1.838s 00:07:36.668 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:36.668 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:36.668 ************************************ 00:07:36.668 END TEST nvmf_host_management 00:07:36.668 ************************************ 00:07:36.668 10:51:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:36.668 10:51:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:36.668 10:51:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:36.668 10:51:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:36.668 ************************************ 00:07:36.668 START TEST nvmf_lvol 00:07:36.668 ************************************ 00:07:36.668 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:36.668 * Looking for test 
storage... 00:07:36.668 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:36.668 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:36.668 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:07:36.668 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:36.668 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:36.668 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:36.668 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:36.668 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:36.668 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:07:36.668 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:07:36.669 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:07:36.669 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:07:36.669 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:07:36.669 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:07:36.669 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:07:36.669 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:36.669 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:07:36.669 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:07:36.669 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:36.669 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:36.669 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:07:36.669 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:07:36.669 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:36.669 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:07:36.669 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:07:36.669 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:07:36.669 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:07:36.669 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:36.669 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:07:36.669 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:07:36.669 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:36.669 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:36.669 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:07:36.669 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:36.669 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:36.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.669 --rc genhtml_branch_coverage=1 00:07:36.669 --rc genhtml_function_coverage=1 00:07:36.669 --rc genhtml_legend=1 00:07:36.669 --rc geninfo_all_blocks=1 00:07:36.669 --rc geninfo_unexecuted_blocks=1 00:07:36.669 00:07:36.669 ' 00:07:36.669 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:36.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.669 --rc genhtml_branch_coverage=1 00:07:36.669 --rc genhtml_function_coverage=1 00:07:36.669 --rc genhtml_legend=1 00:07:36.669 --rc geninfo_all_blocks=1 00:07:36.669 --rc geninfo_unexecuted_blocks=1 00:07:36.669 00:07:36.669 ' 00:07:36.669 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:36.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.669 --rc genhtml_branch_coverage=1 00:07:36.669 --rc genhtml_function_coverage=1 00:07:36.669 --rc genhtml_legend=1 00:07:36.669 --rc geninfo_all_blocks=1 00:07:36.669 --rc geninfo_unexecuted_blocks=1 00:07:36.669 00:07:36.669 ' 00:07:36.669 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:36.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.669 --rc genhtml_branch_coverage=1 00:07:36.669 --rc genhtml_function_coverage=1 00:07:36.669 --rc genhtml_legend=1 00:07:36.669 --rc geninfo_all_blocks=1 00:07:36.669 --rc geninfo_unexecuted_blocks=1 00:07:36.669 00:07:36.669 ' 00:07:36.669 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:36.669 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:36.669 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:36.669 10:51:23 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:36.669 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:36.669 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:36.669 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:36.669 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:36.669 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:36.669 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:36.669 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:36.669 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:36.929 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:07:36.929 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:07:36.929 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:36.929 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:36.929 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:36.929 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:36.929 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:36.929 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:07:36.929 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:36.929 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:36.929 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:36.929 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.929 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.929 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.929 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:36.929 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.929 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:07:36.929 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:36.929 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:36.929 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:36.929 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:36.929 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:36.929 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:36.929 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:36.929 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:36.929 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:36.929 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:36.929 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:36.929 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:36.929 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:07:36.929 
10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:36.929 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:36.929 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:36.929 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:36.929 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:36.929 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:36.929 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:36.929 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:36.929 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:36.929 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:36.929 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:36.929 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:07:36.929 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:07:36.929 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:07:36.929 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:07:36.929 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:07:36.929 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@460 -- # nvmf_veth_init 00:07:36.929 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:36.929 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:07:36.929 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:07:36.929 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:07:36.929 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:36.929 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:07:36.929 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:36.929 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:07:36.929 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:36.929 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:07:36.930 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:36.930 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:36.930 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:36.930 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
00:07:36.930 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:36.930 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:36.930 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:07:36.930 Cannot find device "nvmf_init_br" 00:07:36.930 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:07:36.930 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:07:36.930 Cannot find device "nvmf_init_br2" 00:07:36.930 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:07:36.930 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:07:36.930 Cannot find device "nvmf_tgt_br" 00:07:36.930 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # true 00:07:36.930 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:07:36.930 Cannot find device "nvmf_tgt_br2" 00:07:36.930 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # true 00:07:36.930 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:07:36.930 Cannot find device "nvmf_init_br" 00:07:36.930 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # true 00:07:36.930 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:07:36.930 Cannot find device "nvmf_init_br2" 00:07:36.930 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # true 00:07:36.930 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:07:36.930 Cannot find device "nvmf_tgt_br" 00:07:36.930 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # true 00:07:36.930 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:07:36.930 Cannot find device "nvmf_tgt_br2" 00:07:36.930 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # true 00:07:36.930 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:07:36.930 Cannot find device "nvmf_br" 00:07:36.930 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # true 00:07:36.930 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:07:36.930 Cannot find device "nvmf_init_if" 00:07:36.930 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # true 00:07:36.930 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:07:36.930 Cannot find device "nvmf_init_if2" 00:07:36.930 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # true 00:07:36.930 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:36.930 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:36.930 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # true 00:07:36.930 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:36.930 Cannot open network namespace "nvmf_tgt_ns_spdk": No 
such file or directory 00:07:36.930 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # true 00:07:36.930 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:07:36.930 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:36.930 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:07:36.930 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:36.930 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:36.930 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:36.930 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:37.189 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:37.189 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:07:37.189 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:07:37.189 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:07:37.189 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:07:37.189 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:07:37.189 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:07:37.189 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:07:37.189 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:07:37.189 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:07:37.189 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:37.189 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:37.189 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:37.189 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:07:37.189 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:07:37.189 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:07:37.189 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:07:37.189 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:37.189 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:37.189 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 
1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:37.189 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:07:37.189 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:07:37.189 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:07:37.189 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:37.189 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:07:37.189 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:07:37.189 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:37.189 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.078 ms 00:07:37.189 00:07:37.189 --- 10.0.0.3 ping statistics --- 00:07:37.189 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:37.189 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:07:37.189 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:07:37.189 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:07:37.189 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:07:37.189 00:07:37.189 --- 10.0.0.4 ping statistics --- 00:07:37.189 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:37.189 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:07:37.189 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:37.189 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:37.189 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:07:37.189 00:07:37.189 --- 10.0.0.1 ping statistics --- 00:07:37.189 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:37.189 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:07:37.189 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:07:37.189 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:37.189 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:07:37.189 00:07:37.190 --- 10.0.0.2 ping statistics --- 00:07:37.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:37.190 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:07:37.190 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:37.190 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@461 -- # return 0 00:07:37.190 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:37.190 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:37.190 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:37.190 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:37.190 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:37.190 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:37.190 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:37.190 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:37.190 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:37.190 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:37.190 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:37.190 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=62641 00:07:37.190 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:37.190 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 62641 00:07:37.190 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 62641 ']' 00:07:37.190 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:37.190 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:37.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:37.190 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:37.190 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:37.190 10:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:37.190 [2024-11-15 10:51:24.033599] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
00:07:37.190 [2024-11-15 10:51:24.033711] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:37.449 [2024-11-15 10:51:24.186781] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:37.449 [2024-11-15 10:51:24.269722] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:37.449 [2024-11-15 10:51:24.269794] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:37.449 [2024-11-15 10:51:24.269808] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:37.449 [2024-11-15 10:51:24.269819] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:37.449 [2024-11-15 10:51:24.269828] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:37.449 [2024-11-15 10:51:24.271376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:37.449 [2024-11-15 10:51:24.271510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:37.449 [2024-11-15 10:51:24.271522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.708 [2024-11-15 10:51:24.348079] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:38.276 10:51:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:38.276 10:51:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:07:38.276 10:51:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:38.276 10:51:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:38.276 10:51:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:38.276 10:51:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:38.276 10:51:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:38.535 [2024-11-15 10:51:25.337467] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:38.535 10:51:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:39.103 10:51:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:39.103 10:51:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:39.362 10:51:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:39.362 10:51:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:39.622 10:51:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:39.881 10:51:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=5b84f446-1b7b-48be-8ebc-a725b3ee67f5 00:07:39.881 10:51:26 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 5b84f446-1b7b-48be-8ebc-a725b3ee67f5 lvol 20 00:07:40.140 10:51:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=07f982de-7a37-420c-b8d7-17dfe476f266 00:07:40.140 10:51:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:40.399 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 07f982de-7a37-420c-b8d7-17dfe476f266 00:07:40.658 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:07:40.916 [2024-11-15 10:51:27.571295] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:40.916 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:07:41.175 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=62722 00:07:41.175 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:41.175 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:42.112 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 07f982de-7a37-420c-b8d7-17dfe476f266 MY_SNAPSHOT 00:07:42.371 10:51:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=5ea38036-39ce-44d7-9fd3-2708b7b67ce7 00:07:42.371 10:51:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 07f982de-7a37-420c-b8d7-17dfe476f266 30 00:07:42.939 10:51:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 5ea38036-39ce-44d7-9fd3-2708b7b67ce7 MY_CLONE 00:07:42.939 10:51:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=ef659033-8566-4286-84d9-3867d79411f2 00:07:42.939 10:51:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate ef659033-8566-4286-84d9-3867d79411f2 00:07:43.508 10:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 62722 00:07:51.658 Initializing NVMe Controllers 00:07:51.658 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:07:51.658 Controller IO queue size 128, less than required. 00:07:51.658 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:51.658 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:07:51.658 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:07:51.658 Initialization complete. Launching workers. 
00:07:51.658 ======================================================== 00:07:51.658 Latency(us) 00:07:51.658 Device Information : IOPS MiB/s Average min max 00:07:51.658 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 7283.60 28.45 17591.96 2303.32 103252.46 00:07:51.658 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 7591.30 29.65 16864.37 4027.29 97792.94 00:07:51.658 ======================================================== 00:07:51.658 Total : 14874.90 58.11 17220.64 2303.32 103252.46 00:07:51.658 00:07:51.658 10:51:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:51.917 10:51:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 07f982de-7a37-420c-b8d7-17dfe476f266 00:07:51.917 10:51:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5b84f446-1b7b-48be-8ebc-a725b3ee67f5 00:07:52.177 10:51:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:07:52.177 10:51:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:07:52.177 10:51:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:07:52.177 10:51:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:52.177 10:51:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:07:52.177 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:52.177 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:07:52.177 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:52.177 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:52.436 rmmod nvme_tcp 00:07:52.436 rmmod nvme_fabrics 00:07:52.436 rmmod nvme_keyring 00:07:52.436 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:52.436 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:07:52.436 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:07:52.436 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 62641 ']' 00:07:52.436 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 62641 00:07:52.436 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 62641 ']' 00:07:52.436 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 62641 00:07:52.436 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:07:52.436 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:52.436 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62641 00:07:52.436 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:52.436 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:52.436 killing process with pid 62641 00:07:52.436 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 62641' 00:07:52.436 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 62641 00:07:52.436 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 62641 00:07:52.695 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:52.695 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:52.695 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:52.695 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:07:52.695 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:07:52.695 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:52.695 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:07:52.695 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:52.695 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:07:52.695 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:07:52.695 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:07:52.695 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:07:52.695 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:07:52.695 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:07:52.695 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:07:52.695 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:07:52.695 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:07:52.696 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:07:52.696 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:07:52.955 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:07:52.955 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:52.955 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:52.955 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns 00:07:52.955 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:52.955 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:52.955 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:52.955 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@300 -- # return 0 00:07:52.955 00:07:52.955 real 0m16.323s 00:07:52.955 user 1m6.950s 00:07:52.955 sys 0m3.968s 00:07:52.955 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:07:52.955 ************************************ 00:07:52.955 END TEST nvmf_lvol 00:07:52.955 ************************************ 00:07:52.955 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:52.955 10:51:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:52.955 10:51:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:52.955 10:51:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:52.955 10:51:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:52.955 ************************************ 00:07:52.955 START TEST nvmf_lvs_grow 00:07:52.955 ************************************ 00:07:52.955 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:52.955 * Looking for test storage... 00:07:52.955 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:52.955 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:52.955 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:07:52.955 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:53.215 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:53.215 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:53.215 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:53.215 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:53.215 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:07:53.215 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:07:53.215 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:07:53.215 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:07:53.215 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:07:53.216 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:07:53.216 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:07:53.216 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:53.216 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:07:53.216 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:07:53.216 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:53.216 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:53.216 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:07:53.216 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:07:53.216 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:53.216 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:07:53.216 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:07:53.216 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:07:53.216 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:07:53.216 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:53.216 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:07:53.216 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:07:53.216 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:53.216 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:53.216 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:07:53.216 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:53.216 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:53.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.216 --rc genhtml_branch_coverage=1 00:07:53.216 --rc genhtml_function_coverage=1 00:07:53.216 --rc genhtml_legend=1 00:07:53.216 --rc geninfo_all_blocks=1 00:07:53.216 --rc geninfo_unexecuted_blocks=1 00:07:53.216 00:07:53.216 ' 00:07:53.216 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:53.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.216 --rc genhtml_branch_coverage=1 00:07:53.216 --rc genhtml_function_coverage=1 00:07:53.216 --rc genhtml_legend=1 00:07:53.216 --rc geninfo_all_blocks=1 00:07:53.216 --rc geninfo_unexecuted_blocks=1 00:07:53.216 00:07:53.216 ' 00:07:53.216 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:53.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.216 --rc genhtml_branch_coverage=1 00:07:53.216 --rc genhtml_function_coverage=1 00:07:53.216 --rc genhtml_legend=1 00:07:53.216 --rc geninfo_all_blocks=1 00:07:53.216 --rc geninfo_unexecuted_blocks=1 00:07:53.216 00:07:53.216 ' 00:07:53.216 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:53.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.216 --rc genhtml_branch_coverage=1 00:07:53.216 --rc genhtml_function_coverage=1 00:07:53.216 --rc genhtml_legend=1 00:07:53.216 --rc geninfo_all_blocks=1 00:07:53.216 --rc geninfo_unexecuted_blocks=1 00:07:53.216 00:07:53.216 ' 00:07:53.216 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:53.216 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:53.216 10:51:39 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:53.216 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:53.216 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:53.216 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:53.216 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:53.216 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:53.216 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:53.216 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:53.216 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:53.216 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:53.216 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:07:53.216 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:07:53.216 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:53.216 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:53.216 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:53.216 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:53.216 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:53.216 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:07:53.216 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:53.216 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:53.216 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:53.216 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.216 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.216 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.216 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:53.216 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.216 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:07:53.216 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:53.216 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:53.216 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:53.216 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:53.216 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:53.216 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:53.216 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:53.217 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:53.217 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:53.217 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:53.217 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:53.217 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
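The lvs_grow tests drive two separate SPDK processes over two RPC sockets: the nvmf target listens on the default /var/tmp/spdk.sock (rpc_py above) while a bdevperf instance is configured over /var/tmp/bdevperf.sock (bdevperf_rpc_sock above). A rough sketch of that split, assuming the stock rpc.py and bdevperf started with -z so it waits for configuration over RPC before issuing I/O; the controller name Nvme0 is only an illustrative choice:

    # target-side RPCs go to the default socket
    scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t tcp -o -u 8192
    # bdevperf runs as its own process and is configured over its own socket
    build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -z &
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
        -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0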
00:07:53.217 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:53.217 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:53.217 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:53.217 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:53.217 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:53.217 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:53.217 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:53.217 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:53.217 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:53.217 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:07:53.217 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:07:53.217 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:07:53.217 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:07:53.217 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:07:53.217 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@460 -- # nvmf_veth_init 00:07:53.217 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:53.217 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:07:53.217 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:07:53.217 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:07:53.217 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:53.217 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:07:53.217 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:53.217 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:07:53.217 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:53.217 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:07:53.217 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:53.217 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:53.217 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:53.217 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:53.217 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 
00:07:53.217 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:53.217 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:07:53.217 Cannot find device "nvmf_init_br" 00:07:53.217 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:07:53.217 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:07:53.217 Cannot find device "nvmf_init_br2" 00:07:53.217 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:07:53.217 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:07:53.217 Cannot find device "nvmf_tgt_br" 00:07:53.217 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 00:07:53.217 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:07:53.217 Cannot find device "nvmf_tgt_br2" 00:07:53.217 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 00:07:53.217 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:07:53.217 Cannot find device "nvmf_init_br" 00:07:53.217 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 00:07:53.217 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:07:53.217 Cannot find device "nvmf_init_br2" 00:07:53.217 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 00:07:53.217 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:07:53.217 Cannot find device "nvmf_tgt_br" 00:07:53.217 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # true 00:07:53.217 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:07:53.217 Cannot find device "nvmf_tgt_br2" 00:07:53.217 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 00:07:53.217 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:07:53.217 Cannot find device "nvmf_br" 00:07:53.217 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 00:07:53.217 10:51:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:07:53.217 Cannot find device "nvmf_init_if" 00:07:53.217 10:51:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 00:07:53.217 10:51:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:07:53.217 Cannot find device "nvmf_init_if2" 00:07:53.217 10:51:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 00:07:53.217 10:51:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:53.217 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:53.217 10:51:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 00:07:53.217 10:51:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:53.217 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:07:53.217 10:51:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 00:07:53.217 10:51:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:07:53.217 10:51:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:53.217 10:51:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:07:53.217 10:51:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:53.217 10:51:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:53.217 10:51:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:53.477 10:51:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:53.477 10:51:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:53.477 10:51:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:07:53.477 10:51:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:07:53.477 10:51:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:07:53.477 10:51:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:07:53.477 10:51:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:07:53.477 10:51:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:07:53.477 10:51:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:07:53.477 10:51:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:07:53.477 10:51:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:07:53.477 10:51:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:53.477 10:51:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:53.477 10:51:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:53.477 10:51:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:07:53.477 10:51:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:07:53.477 10:51:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:07:53.477 10:51:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:07:53.477 10:51:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:53.477 10:51:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
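The ipts wrapper invoked next tags every rule it installs with an "SPDK_NVMF:" comment, which is what allows the teardown path (seen at the end of the nvmf_lvol run above) to strip exactly those rules and nothing else. The same pattern in plain iptables terms:

    # setup: accept NVMe/TCP traffic on the test interfaces, tagged for later cleanup
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
    # teardown: drop only the tagged rules
    iptables-save | grep -v SPDK_NVMF | iptables-restore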
00:07:53.477 10:51:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:53.477 10:51:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:07:53.477 10:51:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:07:53.477 10:51:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:07:53.477 10:51:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:53.477 10:51:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:07:53.477 10:51:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:07:53.477 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:53.477 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:07:53.477 00:07:53.477 --- 10.0.0.3 ping statistics --- 00:07:53.477 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:53.477 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:07:53.477 10:51:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:07:53.477 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:07:53.477 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.045 ms 00:07:53.477 00:07:53.477 --- 10.0.0.4 ping statistics --- 00:07:53.477 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:53.477 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:07:53.477 10:51:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:53.477 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:53.477 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:07:53.477 00:07:53.477 --- 10.0.0.1 ping statistics --- 00:07:53.477 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:53.477 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:07:53.477 10:51:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:07:53.477 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:53.477 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:07:53.477 00:07:53.477 --- 10.0.0.2 ping statistics --- 00:07:53.477 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:53.477 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:07:53.477 10:51:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:53.477 10:51:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@461 -- # return 0 00:07:53.477 10:51:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:53.477 10:51:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:53.477 10:51:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:53.477 10:51:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:53.477 10:51:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:53.477 10:51:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:53.477 10:51:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:53.477 10:51:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:07:53.477 10:51:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:53.477 10:51:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:53.477 10:51:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:53.477 10:51:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=63097 00:07:53.477 10:51:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:53.478 10:51:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 63097 00:07:53.478 10:51:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 63097 ']' 00:07:53.478 10:51:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:53.478 10:51:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:53.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:53.478 10:51:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:53.478 10:51:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:53.478 10:51:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:53.736 [2024-11-15 10:51:40.356318] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
00:07:53.736 [2024-11-15 10:51:40.356397] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:53.736 [2024-11-15 10:51:40.508680] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.736 [2024-11-15 10:51:40.582650] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:53.736 [2024-11-15 10:51:40.582718] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:53.736 [2024-11-15 10:51:40.582732] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:53.736 [2024-11-15 10:51:40.582743] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:53.736 [2024-11-15 10:51:40.582753] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:53.736 [2024-11-15 10:51:40.583262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.996 [2024-11-15 10:51:40.657373] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:54.564 10:51:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:54.564 10:51:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:07:54.564 10:51:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:54.564 10:51:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:54.564 10:51:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:54.564 10:51:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:54.564 10:51:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:54.824 [2024-11-15 10:51:41.591812] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:54.824 10:51:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:54.824 10:51:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:54.824 10:51:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:54.824 10:51:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:54.824 ************************************ 00:07:54.824 START TEST lvs_grow_clean 00:07:54.824 ************************************ 00:07:54.824 10:51:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:07:54.824 10:51:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:54.824 10:51:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:54.824 10:51:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:54.824 10:51:41 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:54.824 10:51:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:54.824 10:51:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:54.824 10:51:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:54.824 10:51:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:54.824 10:51:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:55.083 10:51:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:55.083 10:51:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:55.650 10:51:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=10dbcb56-c731-41b3-baf7-123453a986f5 00:07:55.650 10:51:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 10dbcb56-c731-41b3-baf7-123453a986f5 00:07:55.650 10:51:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:55.650 10:51:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:55.650 10:51:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:55.650 10:51:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 10dbcb56-c731-41b3-baf7-123453a986f5 lvol 150 00:07:55.910 10:51:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=dec1d5e3-6390-4e16-8512-a3837bed80a1 00:07:55.910 10:51:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:55.910 10:51:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:56.169 [2024-11-15 10:51:42.921416] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:56.170 [2024-11-15 10:51:42.921521] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:56.170 true 00:07:56.170 10:51:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 10dbcb56-c731-41b3-baf7-123453a986f5 00:07:56.170 10:51:42 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:56.428 10:51:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:56.428 10:51:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:56.686 10:51:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 dec1d5e3-6390-4e16-8512-a3837bed80a1 00:07:56.943 10:51:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:07:57.202 [2024-11-15 10:51:43.841893] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:57.202 10:51:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:07:57.460 10:51:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=63174 00:07:57.460 10:51:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:57.460 10:51:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:57.460 10:51:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 63174 /var/tmp/bdevperf.sock 00:07:57.460 10:51:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 63174 ']' 00:07:57.460 10:51:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:57.460 10:51:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:57.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:57.460 10:51:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:57.460 10:51:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:57.460 10:51:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:57.460 [2024-11-15 10:51:44.174994] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
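At this point the clean run has a 200 MiB AIO file carrying an lvstore with 49 data clusters and a 150 MiB lvol on it; the grow path it is about to exercise, condensed into one sketch ($rpc and $aio are illustrative shorthand, the commands and sizes are the ones traced in this run, and the bdev_lvol_grow_lvstore call itself is issued further down once bdevperf I/O is running):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
aio=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev

truncate -s 200M "$aio"
$rpc bdev_aio_create "$aio" aio_bdev 4096
lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)        # 150 MiB volume, 38 clusters allocated

truncate -s 400M "$aio"                                 # grow the backing file...
$rpc bdev_aio_rescan aio_bdev                           # ...and let the AIO bdev pick up the new size
$rpc bdev_lvol_grow_lvstore -u "$lvs"                   # extend the lvstore into the new space
$rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49 before, 99 after

The 49 -> 99 jump in total_data_clusters is the pass condition the script checks on both sides of the grow.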
00:07:57.460 [2024-11-15 10:51:44.175072] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63174 ] 00:07:57.718 [2024-11-15 10:51:44.325800] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.718 [2024-11-15 10:51:44.386136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:57.718 [2024-11-15 10:51:44.447195] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:58.653 10:51:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:58.653 10:51:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:07:58.653 10:51:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:58.653 Nvme0n1 00:07:58.653 10:51:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:58.912 [ 00:07:58.912 { 00:07:58.912 "name": "Nvme0n1", 00:07:58.912 "aliases": [ 00:07:58.912 "dec1d5e3-6390-4e16-8512-a3837bed80a1" 00:07:58.912 ], 00:07:58.912 "product_name": "NVMe disk", 00:07:58.912 "block_size": 4096, 00:07:58.912 "num_blocks": 38912, 00:07:58.912 "uuid": "dec1d5e3-6390-4e16-8512-a3837bed80a1", 00:07:58.912 "numa_id": -1, 00:07:58.912 "assigned_rate_limits": { 00:07:58.912 "rw_ios_per_sec": 0, 00:07:58.912 "rw_mbytes_per_sec": 0, 00:07:58.912 "r_mbytes_per_sec": 0, 00:07:58.912 "w_mbytes_per_sec": 0 00:07:58.912 }, 00:07:58.912 "claimed": false, 00:07:58.912 "zoned": false, 00:07:58.912 "supported_io_types": { 00:07:58.912 "read": true, 00:07:58.912 "write": true, 00:07:58.912 "unmap": true, 00:07:58.912 "flush": true, 00:07:58.912 "reset": true, 00:07:58.912 "nvme_admin": true, 00:07:58.912 "nvme_io": true, 00:07:58.912 "nvme_io_md": false, 00:07:58.912 "write_zeroes": true, 00:07:58.912 "zcopy": false, 00:07:58.912 "get_zone_info": false, 00:07:58.912 "zone_management": false, 00:07:58.912 "zone_append": false, 00:07:58.912 "compare": true, 00:07:58.912 "compare_and_write": true, 00:07:58.912 "abort": true, 00:07:58.912 "seek_hole": false, 00:07:58.912 "seek_data": false, 00:07:58.912 "copy": true, 00:07:58.912 "nvme_iov_md": false 00:07:58.912 }, 00:07:58.912 "memory_domains": [ 00:07:58.912 { 00:07:58.912 "dma_device_id": "system", 00:07:58.912 "dma_device_type": 1 00:07:58.912 } 00:07:58.912 ], 00:07:58.912 "driver_specific": { 00:07:58.912 "nvme": [ 00:07:58.912 { 00:07:58.912 "trid": { 00:07:58.912 "trtype": "TCP", 00:07:58.912 "adrfam": "IPv4", 00:07:58.912 "traddr": "10.0.0.3", 00:07:58.912 "trsvcid": "4420", 00:07:58.912 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:58.912 }, 00:07:58.912 "ctrlr_data": { 00:07:58.912 "cntlid": 1, 00:07:58.912 "vendor_id": "0x8086", 00:07:58.912 "model_number": "SPDK bdev Controller", 00:07:58.912 "serial_number": "SPDK0", 00:07:58.912 "firmware_revision": "25.01", 00:07:58.912 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:58.912 "oacs": { 00:07:58.912 "security": 0, 00:07:58.912 "format": 0, 00:07:58.912 "firmware": 0, 
00:07:58.912 "ns_manage": 0 00:07:58.912 }, 00:07:58.912 "multi_ctrlr": true, 00:07:58.912 "ana_reporting": false 00:07:58.912 }, 00:07:58.912 "vs": { 00:07:58.912 "nvme_version": "1.3" 00:07:58.912 }, 00:07:58.912 "ns_data": { 00:07:58.912 "id": 1, 00:07:58.912 "can_share": true 00:07:58.912 } 00:07:58.912 } 00:07:58.912 ], 00:07:58.912 "mp_policy": "active_passive" 00:07:58.912 } 00:07:58.912 } 00:07:58.912 ] 00:07:58.912 10:51:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=63203 00:07:58.912 10:51:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:58.912 10:51:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:59.170 Running I/O for 10 seconds... 00:08:00.107 Latency(us) 00:08:00.107 [2024-11-15T10:51:46.968Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:00.107 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:00.107 Nvme0n1 : 1.00 6752.00 26.38 0.00 0.00 0.00 0.00 0.00 00:08:00.107 [2024-11-15T10:51:46.968Z] =================================================================================================================== 00:08:00.107 [2024-11-15T10:51:46.968Z] Total : 6752.00 26.38 0.00 0.00 0.00 0.00 0.00 00:08:00.107 00:08:01.046 10:51:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 10dbcb56-c731-41b3-baf7-123453a986f5 00:08:01.046 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:01.046 Nvme0n1 : 2.00 6805.00 26.58 0.00 0.00 0.00 0.00 0.00 00:08:01.046 [2024-11-15T10:51:47.907Z] =================================================================================================================== 00:08:01.046 [2024-11-15T10:51:47.907Z] Total : 6805.00 26.58 0.00 0.00 0.00 0.00 0.00 00:08:01.046 00:08:01.306 true 00:08:01.306 10:51:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 10dbcb56-c731-41b3-baf7-123453a986f5 00:08:01.306 10:51:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:01.565 10:51:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:01.565 10:51:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:01.565 10:51:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 63203 00:08:02.132 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:02.132 Nvme0n1 : 3.00 6862.67 26.81 0.00 0.00 0.00 0.00 0.00 00:08:02.132 [2024-11-15T10:51:48.993Z] =================================================================================================================== 00:08:02.132 [2024-11-15T10:51:48.993Z] Total : 6862.67 26.81 0.00 0.00 0.00 0.00 0.00 00:08:02.132 00:08:03.069 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:03.069 Nvme0n1 : 4.00 6925.00 27.05 0.00 0.00 0.00 0.00 0.00 00:08:03.069 [2024-11-15T10:51:49.930Z] 
=================================================================================================================== 00:08:03.069 [2024-11-15T10:51:49.930Z] Total : 6925.00 27.05 0.00 0.00 0.00 0.00 0.00 00:08:03.069 00:08:04.018 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:04.018 Nvme0n1 : 5.00 6937.00 27.10 0.00 0.00 0.00 0.00 0.00 00:08:04.018 [2024-11-15T10:51:50.879Z] =================================================================================================================== 00:08:04.018 [2024-11-15T10:51:50.879Z] Total : 6937.00 27.10 0.00 0.00 0.00 0.00 0.00 00:08:04.018 00:08:05.396 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:05.396 Nvme0n1 : 6.00 6855.50 26.78 0.00 0.00 0.00 0.00 0.00 00:08:05.396 [2024-11-15T10:51:52.257Z] =================================================================================================================== 00:08:05.396 [2024-11-15T10:51:52.257Z] Total : 6855.50 26.78 0.00 0.00 0.00 0.00 0.00 00:08:05.396 00:08:06.333 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:06.333 Nvme0n1 : 7.00 6837.71 26.71 0.00 0.00 0.00 0.00 0.00 00:08:06.333 [2024-11-15T10:51:53.194Z] =================================================================================================================== 00:08:06.333 [2024-11-15T10:51:53.194Z] Total : 6837.71 26.71 0.00 0.00 0.00 0.00 0.00 00:08:06.333 00:08:07.271 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:07.271 Nvme0n1 : 8.00 6840.25 26.72 0.00 0.00 0.00 0.00 0.00 00:08:07.271 [2024-11-15T10:51:54.132Z] =================================================================================================================== 00:08:07.271 [2024-11-15T10:51:54.132Z] Total : 6840.25 26.72 0.00 0.00 0.00 0.00 0.00 00:08:07.271 00:08:08.208 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:08.208 Nvme0n1 : 9.00 6856.33 26.78 0.00 0.00 0.00 0.00 0.00 00:08:08.208 [2024-11-15T10:51:55.069Z] =================================================================================================================== 00:08:08.208 [2024-11-15T10:51:55.069Z] Total : 6856.33 26.78 0.00 0.00 0.00 0.00 0.00 00:08:08.208 00:08:09.147 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:09.147 Nvme0n1 : 10.00 6843.80 26.73 0.00 0.00 0.00 0.00 0.00 00:08:09.147 [2024-11-15T10:51:56.008Z] =================================================================================================================== 00:08:09.147 [2024-11-15T10:51:56.008Z] Total : 6843.80 26.73 0.00 0.00 0.00 0.00 0.00 00:08:09.147 00:08:09.147 00:08:09.147 Latency(us) 00:08:09.147 [2024-11-15T10:51:56.008Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:09.147 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:09.147 Nvme0n1 : 10.01 6846.45 26.74 0.00 0.00 18690.47 6166.34 101997.85 00:08:09.147 [2024-11-15T10:51:56.008Z] =================================================================================================================== 00:08:09.147 [2024-11-15T10:51:56.008Z] Total : 6846.45 26.74 0.00 0.00 18690.47 6166.34 101997.85 00:08:09.147 { 00:08:09.147 "results": [ 00:08:09.147 { 00:08:09.147 "job": "Nvme0n1", 00:08:09.147 "core_mask": "0x2", 00:08:09.147 "workload": "randwrite", 00:08:09.147 "status": "finished", 00:08:09.147 "queue_depth": 128, 00:08:09.147 "io_size": 4096, 00:08:09.147 "runtime": 
10.014821, 00:08:09.147 "iops": 6846.452872198115, 00:08:09.147 "mibps": 26.743956532023887, 00:08:09.147 "io_failed": 0, 00:08:09.147 "io_timeout": 0, 00:08:09.147 "avg_latency_us": 18690.473128637834, 00:08:09.147 "min_latency_us": 6166.341818181818, 00:08:09.147 "max_latency_us": 101997.84727272727 00:08:09.147 } 00:08:09.147 ], 00:08:09.147 "core_count": 1 00:08:09.147 } 00:08:09.147 10:51:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 63174 00:08:09.147 10:51:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 63174 ']' 00:08:09.147 10:51:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 63174 00:08:09.147 10:51:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:08:09.147 10:51:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:09.147 10:51:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63174 00:08:09.147 10:51:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:09.147 10:51:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:09.147 killing process with pid 63174 00:08:09.147 10:51:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63174' 00:08:09.147 Received shutdown signal, test time was about 10.000000 seconds 00:08:09.147 00:08:09.147 Latency(us) 00:08:09.147 [2024-11-15T10:51:56.008Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:09.147 [2024-11-15T10:51:56.008Z] =================================================================================================================== 00:08:09.147 [2024-11-15T10:51:56.008Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:09.147 10:51:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 63174 00:08:09.147 10:51:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 63174 00:08:09.406 10:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:09.665 10:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:09.925 10:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 10dbcb56-c731-41b3-baf7-123453a986f5 00:08:09.925 10:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:10.183 10:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:10.183 10:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:10.183 10:51:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:10.443 [2024-11-15 10:51:57.132088] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:10.443 10:51:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 10dbcb56-c731-41b3-baf7-123453a986f5 00:08:10.443 10:51:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:08:10.443 10:51:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 10dbcb56-c731-41b3-baf7-123453a986f5 00:08:10.443 10:51:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:10.443 10:51:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:10.443 10:51:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:10.443 10:51:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:10.443 10:51:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:10.443 10:51:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:10.443 10:51:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:10.443 10:51:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:10.443 10:51:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 10dbcb56-c731-41b3-baf7-123453a986f5 00:08:10.702 request: 00:08:10.702 { 00:08:10.702 "uuid": "10dbcb56-c731-41b3-baf7-123453a986f5", 00:08:10.702 "method": "bdev_lvol_get_lvstores", 00:08:10.702 "req_id": 1 00:08:10.702 } 00:08:10.702 Got JSON-RPC error response 00:08:10.702 response: 00:08:10.702 { 00:08:10.702 "code": -19, 00:08:10.702 "message": "No such device" 00:08:10.702 } 00:08:10.702 10:51:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:08:10.702 10:51:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:10.702 10:51:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:10.702 10:51:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:10.702 10:51:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:10.961 aio_bdev 00:08:10.961 10:51:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
dec1d5e3-6390-4e16-8512-a3837bed80a1 00:08:10.961 10:51:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=dec1d5e3-6390-4e16-8512-a3837bed80a1 00:08:10.962 10:51:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:10.962 10:51:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:08:10.962 10:51:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:10.962 10:51:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:10.962 10:51:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:11.220 10:51:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b dec1d5e3-6390-4e16-8512-a3837bed80a1 -t 2000 00:08:11.480 [ 00:08:11.480 { 00:08:11.480 "name": "dec1d5e3-6390-4e16-8512-a3837bed80a1", 00:08:11.480 "aliases": [ 00:08:11.480 "lvs/lvol" 00:08:11.480 ], 00:08:11.480 "product_name": "Logical Volume", 00:08:11.480 "block_size": 4096, 00:08:11.480 "num_blocks": 38912, 00:08:11.480 "uuid": "dec1d5e3-6390-4e16-8512-a3837bed80a1", 00:08:11.480 "assigned_rate_limits": { 00:08:11.480 "rw_ios_per_sec": 0, 00:08:11.480 "rw_mbytes_per_sec": 0, 00:08:11.480 "r_mbytes_per_sec": 0, 00:08:11.480 "w_mbytes_per_sec": 0 00:08:11.480 }, 00:08:11.480 "claimed": false, 00:08:11.480 "zoned": false, 00:08:11.480 "supported_io_types": { 00:08:11.480 "read": true, 00:08:11.480 "write": true, 00:08:11.480 "unmap": true, 00:08:11.480 "flush": false, 00:08:11.480 "reset": true, 00:08:11.480 "nvme_admin": false, 00:08:11.480 "nvme_io": false, 00:08:11.480 "nvme_io_md": false, 00:08:11.480 "write_zeroes": true, 00:08:11.480 "zcopy": false, 00:08:11.480 "get_zone_info": false, 00:08:11.480 "zone_management": false, 00:08:11.480 "zone_append": false, 00:08:11.480 "compare": false, 00:08:11.480 "compare_and_write": false, 00:08:11.480 "abort": false, 00:08:11.480 "seek_hole": true, 00:08:11.480 "seek_data": true, 00:08:11.480 "copy": false, 00:08:11.480 "nvme_iov_md": false 00:08:11.480 }, 00:08:11.480 "driver_specific": { 00:08:11.480 "lvol": { 00:08:11.480 "lvol_store_uuid": "10dbcb56-c731-41b3-baf7-123453a986f5", 00:08:11.480 "base_bdev": "aio_bdev", 00:08:11.480 "thin_provision": false, 00:08:11.480 "num_allocated_clusters": 38, 00:08:11.480 "snapshot": false, 00:08:11.480 "clone": false, 00:08:11.480 "esnap_clone": false 00:08:11.480 } 00:08:11.480 } 00:08:11.480 } 00:08:11.480 ] 00:08:11.480 10:51:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:08:11.480 10:51:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:11.480 10:51:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 10dbcb56-c731-41b3-baf7-123453a986f5 00:08:11.739 10:51:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:11.739 10:51:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 10dbcb56-c731-41b3-baf7-123453a986f5 00:08:11.739 10:51:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:11.998 10:51:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:11.998 10:51:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete dec1d5e3-6390-4e16-8512-a3837bed80a1 00:08:11.998 10:51:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 10dbcb56-c731-41b3-baf7-123453a986f5 00:08:12.566 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:12.825 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:13.084 ************************************ 00:08:13.084 END TEST lvs_grow_clean 00:08:13.084 ************************************ 00:08:13.084 00:08:13.084 real 0m18.211s 00:08:13.084 user 0m17.190s 00:08:13.084 sys 0m2.600s 00:08:13.084 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:13.084 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:13.084 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:13.084 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:13.084 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:13.084 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:13.084 ************************************ 00:08:13.084 START TEST lvs_grow_dirty 00:08:13.084 ************************************ 00:08:13.084 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:08:13.084 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:13.084 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:13.084 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:13.084 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:13.084 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:13.084 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:13.084 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:13.084 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:13.084 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:13.652 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:13.652 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:13.911 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=5c6b1ec3-4fe7-4cad-ae0a-673219c11598 00:08:13.911 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5c6b1ec3-4fe7-4cad-ae0a-673219c11598 00:08:13.911 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:14.170 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:14.170 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:14.170 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 5c6b1ec3-4fe7-4cad-ae0a-673219c11598 lvol 150 00:08:14.429 10:52:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=9a184eac-b5b1-41fa-9713-00b230782d09 00:08:14.429 10:52:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:14.429 10:52:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:14.429 [2024-11-15 10:52:01.275430] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:14.429 [2024-11-15 10:52:01.275558] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:14.429 true 00:08:14.688 10:52:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5c6b1ec3-4fe7-4cad-ae0a-673219c11598 00:08:14.688 10:52:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:14.688 10:52:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:14.688 10:52:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:14.946 10:52:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 9a184eac-b5b1-41fa-9713-00b230782d09 00:08:15.205 10:52:01 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:08:15.464 [2024-11-15 10:52:02.167957] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:15.464 10:52:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:15.735 10:52:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=63445 00:08:15.735 10:52:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:15.735 10:52:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:15.735 10:52:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 63445 /var/tmp/bdevperf.sock 00:08:15.735 10:52:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 63445 ']' 00:08:15.735 10:52:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:15.735 10:52:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:15.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:15.736 10:52:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:15.736 10:52:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:15.736 10:52:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:15.736 [2024-11-15 10:52:02.473972] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
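The bdevperf instance just launched idles under -z until it is driven over /var/tmp/bdevperf.sock; the initiator half of the run as a sketch (flags, addresses and the NQN are the ones traced; the attach and perform_tests calls appear a little further down in the log):

bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# -z: start idle and wait for RPC; -S 1: report throughput once per second during the run
$bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &

# attach the exported lvol as an NVMe-oF/TCP controller inside bdevperf
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp \
    -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0

# kick off the configured 10 s random-write workload against Nvme0n1
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests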
00:08:15.736 [2024-11-15 10:52:02.474055] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63445 ] 00:08:16.002 [2024-11-15 10:52:02.614518] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.002 [2024-11-15 10:52:02.670456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:16.002 [2024-11-15 10:52:02.720275] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:16.596 10:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:16.596 10:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:16.596 10:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:16.862 Nvme0n1 00:08:16.862 10:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:17.121 [ 00:08:17.121 { 00:08:17.121 "name": "Nvme0n1", 00:08:17.121 "aliases": [ 00:08:17.121 "9a184eac-b5b1-41fa-9713-00b230782d09" 00:08:17.121 ], 00:08:17.121 "product_name": "NVMe disk", 00:08:17.121 "block_size": 4096, 00:08:17.121 "num_blocks": 38912, 00:08:17.121 "uuid": "9a184eac-b5b1-41fa-9713-00b230782d09", 00:08:17.121 "numa_id": -1, 00:08:17.121 "assigned_rate_limits": { 00:08:17.121 "rw_ios_per_sec": 0, 00:08:17.121 "rw_mbytes_per_sec": 0, 00:08:17.121 "r_mbytes_per_sec": 0, 00:08:17.121 "w_mbytes_per_sec": 0 00:08:17.121 }, 00:08:17.121 "claimed": false, 00:08:17.121 "zoned": false, 00:08:17.121 "supported_io_types": { 00:08:17.121 "read": true, 00:08:17.121 "write": true, 00:08:17.121 "unmap": true, 00:08:17.121 "flush": true, 00:08:17.121 "reset": true, 00:08:17.121 "nvme_admin": true, 00:08:17.121 "nvme_io": true, 00:08:17.121 "nvme_io_md": false, 00:08:17.121 "write_zeroes": true, 00:08:17.121 "zcopy": false, 00:08:17.121 "get_zone_info": false, 00:08:17.121 "zone_management": false, 00:08:17.121 "zone_append": false, 00:08:17.121 "compare": true, 00:08:17.121 "compare_and_write": true, 00:08:17.121 "abort": true, 00:08:17.121 "seek_hole": false, 00:08:17.121 "seek_data": false, 00:08:17.121 "copy": true, 00:08:17.121 "nvme_iov_md": false 00:08:17.121 }, 00:08:17.121 "memory_domains": [ 00:08:17.121 { 00:08:17.121 "dma_device_id": "system", 00:08:17.121 "dma_device_type": 1 00:08:17.121 } 00:08:17.121 ], 00:08:17.121 "driver_specific": { 00:08:17.121 "nvme": [ 00:08:17.121 { 00:08:17.121 "trid": { 00:08:17.121 "trtype": "TCP", 00:08:17.121 "adrfam": "IPv4", 00:08:17.121 "traddr": "10.0.0.3", 00:08:17.121 "trsvcid": "4420", 00:08:17.121 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:17.121 }, 00:08:17.121 "ctrlr_data": { 00:08:17.121 "cntlid": 1, 00:08:17.121 "vendor_id": "0x8086", 00:08:17.121 "model_number": "SPDK bdev Controller", 00:08:17.121 "serial_number": "SPDK0", 00:08:17.121 "firmware_revision": "25.01", 00:08:17.121 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:17.121 "oacs": { 00:08:17.121 "security": 0, 00:08:17.121 "format": 0, 00:08:17.121 "firmware": 0, 
00:08:17.121 "ns_manage": 0 00:08:17.121 }, 00:08:17.121 "multi_ctrlr": true, 00:08:17.121 "ana_reporting": false 00:08:17.121 }, 00:08:17.121 "vs": { 00:08:17.121 "nvme_version": "1.3" 00:08:17.121 }, 00:08:17.121 "ns_data": { 00:08:17.121 "id": 1, 00:08:17.121 "can_share": true 00:08:17.121 } 00:08:17.121 } 00:08:17.121 ], 00:08:17.122 "mp_policy": "active_passive" 00:08:17.122 } 00:08:17.122 } 00:08:17.122 ] 00:08:17.122 10:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:17.122 10:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=63474 00:08:17.122 10:52:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:17.381 Running I/O for 10 seconds... 00:08:18.319 Latency(us) 00:08:18.319 [2024-11-15T10:52:05.180Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:18.319 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:18.319 Nvme0n1 : 1.00 6858.00 26.79 0.00 0.00 0.00 0.00 0.00 00:08:18.319 [2024-11-15T10:52:05.180Z] =================================================================================================================== 00:08:18.319 [2024-11-15T10:52:05.180Z] Total : 6858.00 26.79 0.00 0.00 0.00 0.00 0.00 00:08:18.319 00:08:19.256 10:52:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 5c6b1ec3-4fe7-4cad-ae0a-673219c11598 00:08:19.256 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:19.256 Nvme0n1 : 2.00 6858.00 26.79 0.00 0.00 0.00 0.00 0.00 00:08:19.256 [2024-11-15T10:52:06.117Z] =================================================================================================================== 00:08:19.256 [2024-11-15T10:52:06.117Z] Total : 6858.00 26.79 0.00 0.00 0.00 0.00 0.00 00:08:19.256 00:08:19.515 true 00:08:19.515 10:52:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:19.515 10:52:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5c6b1ec3-4fe7-4cad-ae0a-673219c11598 00:08:19.774 10:52:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:19.774 10:52:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:19.774 10:52:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 63474 00:08:20.343 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:20.343 Nvme0n1 : 3.00 6858.00 26.79 0.00 0.00 0.00 0.00 0.00 00:08:20.343 [2024-11-15T10:52:07.204Z] =================================================================================================================== 00:08:20.343 [2024-11-15T10:52:07.204Z] Total : 6858.00 26.79 0.00 0.00 0.00 0.00 0.00 00:08:20.343 00:08:21.279 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:21.279 Nvme0n1 : 4.00 6826.25 26.67 0.00 0.00 0.00 0.00 0.00 00:08:21.279 [2024-11-15T10:52:08.140Z] 
=================================================================================================================== 00:08:21.279 [2024-11-15T10:52:08.140Z] Total : 6826.25 26.67 0.00 0.00 0.00 0.00 0.00 00:08:21.279 00:08:22.216 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:22.216 Nvme0n1 : 5.00 6781.80 26.49 0.00 0.00 0.00 0.00 0.00 00:08:22.216 [2024-11-15T10:52:09.077Z] =================================================================================================================== 00:08:22.216 [2024-11-15T10:52:09.077Z] Total : 6781.80 26.49 0.00 0.00 0.00 0.00 0.00 00:08:22.216 00:08:23.153 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:23.153 Nvme0n1 : 6.00 6731.00 26.29 0.00 0.00 0.00 0.00 0.00 00:08:23.153 [2024-11-15T10:52:10.014Z] =================================================================================================================== 00:08:23.153 [2024-11-15T10:52:10.014Z] Total : 6731.00 26.29 0.00 0.00 0.00 0.00 0.00 00:08:23.153 00:08:24.530 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:24.530 Nvme0n1 : 7.00 6543.29 25.56 0.00 0.00 0.00 0.00 0.00 00:08:24.530 [2024-11-15T10:52:11.391Z] =================================================================================================================== 00:08:24.530 [2024-11-15T10:52:11.391Z] Total : 6543.29 25.56 0.00 0.00 0.00 0.00 0.00 00:08:24.530 00:08:25.467 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:25.467 Nvme0n1 : 8.00 6487.38 25.34 0.00 0.00 0.00 0.00 0.00 00:08:25.467 [2024-11-15T10:52:12.328Z] =================================================================================================================== 00:08:25.467 [2024-11-15T10:52:12.328Z] Total : 6487.38 25.34 0.00 0.00 0.00 0.00 0.00 00:08:25.467 00:08:26.404 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:26.404 Nvme0n1 : 9.00 6443.89 25.17 0.00 0.00 0.00 0.00 0.00 00:08:26.404 [2024-11-15T10:52:13.265Z] =================================================================================================================== 00:08:26.404 [2024-11-15T10:52:13.265Z] Total : 6443.89 25.17 0.00 0.00 0.00 0.00 0.00 00:08:26.404 00:08:27.342 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:27.342 Nvme0n1 : 10.00 6421.80 25.09 0.00 0.00 0.00 0.00 0.00 00:08:27.342 [2024-11-15T10:52:14.203Z] =================================================================================================================== 00:08:27.342 [2024-11-15T10:52:14.203Z] Total : 6421.80 25.09 0.00 0.00 0.00 0.00 0.00 00:08:27.342 00:08:27.342 00:08:27.342 Latency(us) 00:08:27.342 [2024-11-15T10:52:14.203Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:27.342 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:27.342 Nvme0n1 : 10.01 6428.72 25.11 0.00 0.00 19906.49 10783.65 216387.96 00:08:27.342 [2024-11-15T10:52:14.203Z] =================================================================================================================== 00:08:27.342 [2024-11-15T10:52:14.203Z] Total : 6428.72 25.11 0.00 0.00 19906.49 10783.65 216387.96 00:08:27.342 { 00:08:27.342 "results": [ 00:08:27.342 { 00:08:27.342 "job": "Nvme0n1", 00:08:27.342 "core_mask": "0x2", 00:08:27.342 "workload": "randwrite", 00:08:27.342 "status": "finished", 00:08:27.342 "queue_depth": 128, 00:08:27.342 "io_size": 4096, 00:08:27.342 "runtime": 
10.009152, 00:08:27.342 "iops": 6428.716438715288, 00:08:27.342 "mibps": 25.112173588731594, 00:08:27.342 "io_failed": 0, 00:08:27.342 "io_timeout": 0, 00:08:27.342 "avg_latency_us": 19906.491251670654, 00:08:27.342 "min_latency_us": 10783.65090909091, 00:08:27.342 "max_latency_us": 216387.95636363636 00:08:27.342 } 00:08:27.342 ], 00:08:27.342 "core_count": 1 00:08:27.342 } 00:08:27.342 10:52:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 63445 00:08:27.342 10:52:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 63445 ']' 00:08:27.342 10:52:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 63445 00:08:27.342 10:52:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:08:27.342 10:52:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:27.342 10:52:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63445 00:08:27.342 10:52:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:27.342 10:52:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:27.342 killing process with pid 63445 00:08:27.342 10:52:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63445' 00:08:27.342 Received shutdown signal, test time was about 10.000000 seconds 00:08:27.342 00:08:27.342 Latency(us) 00:08:27.342 [2024-11-15T10:52:14.203Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:27.342 [2024-11-15T10:52:14.203Z] =================================================================================================================== 00:08:27.342 [2024-11-15T10:52:14.203Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:27.342 10:52:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 63445 00:08:27.342 10:52:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 63445 00:08:27.601 10:52:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:27.861 10:52:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:28.119 10:52:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5c6b1ec3-4fe7-4cad-ae0a-673219c11598 00:08:28.119 10:52:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:28.384 10:52:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:28.384 10:52:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:28.384 10:52:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 63097 
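The dirty variant's teardown, condensed (a sketch; $rpc and $lvs follow the shorthand of the earlier sketches, and $nvmfpid is the target traced above as 63097):

$rpc nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420
$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
$rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters'   # 61 clusters left free by the writes
kill -9 "$nvmfpid"   # SIGKILL: the blobstore gets no chance to close cleanly, so the lvstore is left dirty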
00:08:28.384 10:52:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 63097 00:08:28.384 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 63097 Killed "${NVMF_APP[@]}" "$@" 00:08:28.384 10:52:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:28.384 10:52:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:28.384 10:52:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:28.384 10:52:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:28.384 10:52:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:28.384 10:52:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=63607 00:08:28.384 10:52:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:28.384 10:52:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 63607 00:08:28.384 10:52:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 63607 ']' 00:08:28.384 10:52:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:28.384 10:52:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:28.384 10:52:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:28.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:28.384 10:52:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:28.384 10:52:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:28.384 [2024-11-15 10:52:15.105366] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:08:28.384 [2024-11-15 10:52:15.106250] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:28.672 [2024-11-15 10:52:15.255519] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.672 [2024-11-15 10:52:15.309130] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:28.672 [2024-11-15 10:52:15.309217] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:28.672 [2024-11-15 10:52:15.309228] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:28.672 [2024-11-15 10:52:15.309236] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:28.672 [2024-11-15 10:52:15.309243] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
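A fresh target (pid 63607 above) then re-registers the same backing file; opening the dirty lvstore is what triggers the bs_recover notices a few lines below (a sketch, same shorthand as above):

ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
nvmfpid=$!
$rpc bdev_aio_create "$aio" aio_bdev 4096   # examine finds the dirty lvstore, blobstore recovery runs
$rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters, .[0].total_data_clusters'   # 61 and 99 survive recovery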
00:08:28.672 [2024-11-15 10:52:15.309695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.672 [2024-11-15 10:52:15.380768] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:28.672 10:52:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:28.672 10:52:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:28.672 10:52:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:28.672 10:52:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:28.672 10:52:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:28.672 10:52:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:28.672 10:52:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:28.943 [2024-11-15 10:52:15.764356] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:28.943 [2024-11-15 10:52:15.765391] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:28.943 [2024-11-15 10:52:15.765718] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:29.202 10:52:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:29.202 10:52:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 9a184eac-b5b1-41fa-9713-00b230782d09 00:08:29.202 10:52:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=9a184eac-b5b1-41fa-9713-00b230782d09 00:08:29.202 10:52:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:29.202 10:52:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:29.202 10:52:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:29.202 10:52:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:29.202 10:52:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:29.462 10:52:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9a184eac-b5b1-41fa-9713-00b230782d09 -t 2000 00:08:29.462 [ 00:08:29.462 { 00:08:29.462 "name": "9a184eac-b5b1-41fa-9713-00b230782d09", 00:08:29.462 "aliases": [ 00:08:29.462 "lvs/lvol" 00:08:29.462 ], 00:08:29.462 "product_name": "Logical Volume", 00:08:29.462 "block_size": 4096, 00:08:29.462 "num_blocks": 38912, 00:08:29.462 "uuid": "9a184eac-b5b1-41fa-9713-00b230782d09", 00:08:29.462 "assigned_rate_limits": { 00:08:29.462 "rw_ios_per_sec": 0, 00:08:29.462 "rw_mbytes_per_sec": 0, 00:08:29.462 "r_mbytes_per_sec": 0, 00:08:29.462 "w_mbytes_per_sec": 0 00:08:29.462 }, 00:08:29.462 
"claimed": false, 00:08:29.462 "zoned": false, 00:08:29.462 "supported_io_types": { 00:08:29.462 "read": true, 00:08:29.462 "write": true, 00:08:29.462 "unmap": true, 00:08:29.462 "flush": false, 00:08:29.462 "reset": true, 00:08:29.462 "nvme_admin": false, 00:08:29.462 "nvme_io": false, 00:08:29.462 "nvme_io_md": false, 00:08:29.462 "write_zeroes": true, 00:08:29.462 "zcopy": false, 00:08:29.462 "get_zone_info": false, 00:08:29.462 "zone_management": false, 00:08:29.462 "zone_append": false, 00:08:29.462 "compare": false, 00:08:29.462 "compare_and_write": false, 00:08:29.462 "abort": false, 00:08:29.462 "seek_hole": true, 00:08:29.462 "seek_data": true, 00:08:29.462 "copy": false, 00:08:29.462 "nvme_iov_md": false 00:08:29.462 }, 00:08:29.462 "driver_specific": { 00:08:29.462 "lvol": { 00:08:29.462 "lvol_store_uuid": "5c6b1ec3-4fe7-4cad-ae0a-673219c11598", 00:08:29.462 "base_bdev": "aio_bdev", 00:08:29.462 "thin_provision": false, 00:08:29.462 "num_allocated_clusters": 38, 00:08:29.462 "snapshot": false, 00:08:29.462 "clone": false, 00:08:29.462 "esnap_clone": false 00:08:29.462 } 00:08:29.462 } 00:08:29.462 } 00:08:29.462 ] 00:08:29.721 10:52:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:29.721 10:52:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5c6b1ec3-4fe7-4cad-ae0a-673219c11598 00:08:29.721 10:52:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:29.721 10:52:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:29.721 10:52:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5c6b1ec3-4fe7-4cad-ae0a-673219c11598 00:08:29.721 10:52:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:29.981 10:52:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:29.981 10:52:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:30.240 [2024-11-15 10:52:17.081792] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:30.499 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5c6b1ec3-4fe7-4cad-ae0a-673219c11598 00:08:30.499 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:08:30.499 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5c6b1ec3-4fe7-4cad-ae0a-673219c11598 00:08:30.499 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:30.499 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:30.499 10:52:17 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:30.499 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:30.499 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:30.499 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:30.499 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:30.499 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:30.499 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5c6b1ec3-4fe7-4cad-ae0a-673219c11598 00:08:30.499 request: 00:08:30.499 { 00:08:30.499 "uuid": "5c6b1ec3-4fe7-4cad-ae0a-673219c11598", 00:08:30.499 "method": "bdev_lvol_get_lvstores", 00:08:30.499 "req_id": 1 00:08:30.499 } 00:08:30.499 Got JSON-RPC error response 00:08:30.499 response: 00:08:30.499 { 00:08:30.499 "code": -19, 00:08:30.499 "message": "No such device" 00:08:30.499 } 00:08:30.758 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:08:30.758 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:30.758 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:30.758 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:30.758 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:31.016 aio_bdev 00:08:31.016 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 9a184eac-b5b1-41fa-9713-00b230782d09 00:08:31.016 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=9a184eac-b5b1-41fa-9713-00b230782d09 00:08:31.016 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:31.016 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:31.016 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:31.016 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:31.016 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:31.275 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9a184eac-b5b1-41fa-9713-00b230782d09 -t 2000 00:08:31.534 [ 00:08:31.534 { 
00:08:31.534 "name": "9a184eac-b5b1-41fa-9713-00b230782d09", 00:08:31.534 "aliases": [ 00:08:31.534 "lvs/lvol" 00:08:31.534 ], 00:08:31.534 "product_name": "Logical Volume", 00:08:31.534 "block_size": 4096, 00:08:31.534 "num_blocks": 38912, 00:08:31.534 "uuid": "9a184eac-b5b1-41fa-9713-00b230782d09", 00:08:31.534 "assigned_rate_limits": { 00:08:31.534 "rw_ios_per_sec": 0, 00:08:31.534 "rw_mbytes_per_sec": 0, 00:08:31.534 "r_mbytes_per_sec": 0, 00:08:31.534 "w_mbytes_per_sec": 0 00:08:31.534 }, 00:08:31.534 "claimed": false, 00:08:31.534 "zoned": false, 00:08:31.534 "supported_io_types": { 00:08:31.534 "read": true, 00:08:31.534 "write": true, 00:08:31.535 "unmap": true, 00:08:31.535 "flush": false, 00:08:31.535 "reset": true, 00:08:31.535 "nvme_admin": false, 00:08:31.535 "nvme_io": false, 00:08:31.535 "nvme_io_md": false, 00:08:31.535 "write_zeroes": true, 00:08:31.535 "zcopy": false, 00:08:31.535 "get_zone_info": false, 00:08:31.535 "zone_management": false, 00:08:31.535 "zone_append": false, 00:08:31.535 "compare": false, 00:08:31.535 "compare_and_write": false, 00:08:31.535 "abort": false, 00:08:31.535 "seek_hole": true, 00:08:31.535 "seek_data": true, 00:08:31.535 "copy": false, 00:08:31.535 "nvme_iov_md": false 00:08:31.535 }, 00:08:31.535 "driver_specific": { 00:08:31.535 "lvol": { 00:08:31.535 "lvol_store_uuid": "5c6b1ec3-4fe7-4cad-ae0a-673219c11598", 00:08:31.535 "base_bdev": "aio_bdev", 00:08:31.535 "thin_provision": false, 00:08:31.535 "num_allocated_clusters": 38, 00:08:31.535 "snapshot": false, 00:08:31.535 "clone": false, 00:08:31.535 "esnap_clone": false 00:08:31.535 } 00:08:31.535 } 00:08:31.535 } 00:08:31.535 ] 00:08:31.535 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:31.535 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5c6b1ec3-4fe7-4cad-ae0a-673219c11598 00:08:31.535 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:31.794 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:31.794 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5c6b1ec3-4fe7-4cad-ae0a-673219c11598 00:08:31.794 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:32.053 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:32.053 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 9a184eac-b5b1-41fa-9713-00b230782d09 00:08:32.053 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5c6b1ec3-4fe7-4cad-ae0a-673219c11598 00:08:32.621 10:52:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:32.621 10:52:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:33.188 00:08:33.188 real 0m19.927s 00:08:33.188 user 0m41.021s 00:08:33.188 sys 0m9.369s 00:08:33.188 10:52:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:33.188 10:52:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:33.188 ************************************ 00:08:33.188 END TEST lvs_grow_dirty 00:08:33.188 ************************************ 00:08:33.188 10:52:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:33.188 10:52:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:08:33.188 10:52:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:08:33.188 10:52:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:08:33.188 10:52:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:33.188 10:52:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:08:33.188 10:52:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:08:33.188 10:52:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:08:33.188 10:52:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:33.188 nvmf_trace.0 00:08:33.188 10:52:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:08:33.188 10:52:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:33.188 10:52:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:33.188 10:52:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:08:33.447 10:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:33.447 10:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:08:33.447 10:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:33.447 10:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:33.447 rmmod nvme_tcp 00:08:33.447 rmmod nvme_fabrics 00:08:33.447 rmmod nvme_keyring 00:08:33.447 10:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:33.707 10:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:08:33.707 10:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:08:33.707 10:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 63607 ']' 00:08:33.707 10:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 63607 00:08:33.707 10:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 63607 ']' 00:08:33.707 10:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 63607 00:08:33.707 10:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:08:33.707 10:52:20 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:33.707 10:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63607 00:08:33.707 10:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:33.707 10:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:33.707 killing process with pid 63607 00:08:33.707 10:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63607' 00:08:33.707 10:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 63607 00:08:33.707 10:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 63607 00:08:33.966 10:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:33.966 10:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:33.966 10:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:33.966 10:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:08:33.966 10:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:33.966 10:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:08:33.966 10:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:08:33.966 10:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:33.966 10:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:33.966 10:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:33.966 10:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:33.966 10:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:33.966 10:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:33.966 10:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:33.966 10:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:33.966 10:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:33.966 10:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:33.966 10:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:33.966 10:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:33.966 10:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:33.966 10:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:33.966 10:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:34.225 10:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@246 -- # remove_spdk_ns 00:08:34.225 10:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:34.225 10:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:34.225 10:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:34.225 10:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 00:08:34.225 00:08:34.225 real 0m41.195s 00:08:34.225 user 1m4.372s 00:08:34.225 sys 0m13.067s 00:08:34.225 10:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:34.225 10:52:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:34.225 ************************************ 00:08:34.225 END TEST nvmf_lvs_grow 00:08:34.225 ************************************ 00:08:34.225 10:52:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:34.225 10:52:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:34.225 10:52:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:34.225 10:52:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:34.225 ************************************ 00:08:34.225 START TEST nvmf_bdev_io_wait 00:08:34.225 ************************************ 00:08:34.225 10:52:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:34.225 * Looking for test storage... 
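The lvs_grow_dirty trace above exercises dirty-lvstore recovery entirely through scripts/rpc.py. The condensed sketch below restates that sequence as a standalone script; the rpc.py path, backing-file path, cluster counts, and UUIDs are simply the values printed in the trace, so treat them as placeholders for a comparable setup rather than fixed requirements.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
aio_file=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
lvol=9a184eac-b5b1-41fa-9713-00b230782d09
lvs=5c6b1ec3-4fe7-4cad-ae0a-673219c11598

# Re-attach the AIO backing file; blobstore replay recovers the dirty metadata
# and the lvol bdev reappears.
$rpc bdev_aio_create "$aio_file" aio_bdev 4096
$rpc bdev_wait_for_examine
$rpc bdev_get_bdevs -b "$lvol" -t 2000

# Cluster accounting must survive the unclean shutdown (61 free / 99 total here).
$rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters'
$rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'

# Deleting the AIO bdev hot-removes the lvstore, so the lookup is expected to fail.
$rpc bdev_aio_delete aio_bdev
$rpc bdev_lvol_get_lvstores -u "$lvs" || echo "lvstore gone, as expected"

# Re-attach once more, verify again, then tear everything down.
$rpc bdev_aio_create "$aio_file" aio_bdev 4096
$rpc bdev_wait_for_examine
$rpc bdev_lvol_delete "$lvol"
$rpc bdev_lvol_delete_lvstore -u "$lvs"
$rpc bdev_aio_delete aio_bdev
rm -f "$aio_file"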
00:08:34.225 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:34.225 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:34.225 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:08:34.225 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:34.485 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:34.485 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:34.485 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:34.485 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:34.485 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:08:34.485 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:08:34.485 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:08:34.485 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:08:34.485 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:08:34.485 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:08:34.485 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:08:34.485 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:34.485 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:08:34.485 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:08:34.485 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:34.485 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:34.485 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:08:34.485 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:08:34.485 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:34.485 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:08:34.485 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:08:34.485 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:08:34.485 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:08:34.485 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:34.485 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:08:34.485 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:08:34.485 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:34.485 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:34.485 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:08:34.485 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:34.485 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:34.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.485 --rc genhtml_branch_coverage=1 00:08:34.486 --rc genhtml_function_coverage=1 00:08:34.486 --rc genhtml_legend=1 00:08:34.486 --rc geninfo_all_blocks=1 00:08:34.486 --rc geninfo_unexecuted_blocks=1 00:08:34.486 00:08:34.486 ' 00:08:34.486 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:34.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.486 --rc genhtml_branch_coverage=1 00:08:34.486 --rc genhtml_function_coverage=1 00:08:34.486 --rc genhtml_legend=1 00:08:34.486 --rc geninfo_all_blocks=1 00:08:34.486 --rc geninfo_unexecuted_blocks=1 00:08:34.486 00:08:34.486 ' 00:08:34.486 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:34.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.486 --rc genhtml_branch_coverage=1 00:08:34.486 --rc genhtml_function_coverage=1 00:08:34.486 --rc genhtml_legend=1 00:08:34.486 --rc geninfo_all_blocks=1 00:08:34.486 --rc geninfo_unexecuted_blocks=1 00:08:34.486 00:08:34.486 ' 00:08:34.486 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:34.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.486 --rc genhtml_branch_coverage=1 00:08:34.486 --rc genhtml_function_coverage=1 00:08:34.486 --rc genhtml_legend=1 00:08:34.486 --rc geninfo_all_blocks=1 00:08:34.486 --rc geninfo_unexecuted_blocks=1 00:08:34.486 00:08:34.486 ' 00:08:34.486 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:34.486 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@7 -- # uname -s 00:08:34.486 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:34.486 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:34.486 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:34.486 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:34.486 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:34.486 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:34.486 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:34.486 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:34.486 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:34.486 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:34.486 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:08:34.486 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:08:34.486 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:34.486 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:34.486 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:34.486 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:34.486 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:34.486 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:08:34.486 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:34.486 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:34.486 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:34.486 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.486 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.486 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.486 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:34.486 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.486 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:08:34.486 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:34.486 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:34.486 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:34.486 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:34.486 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:34.486 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:34.486 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:34.486 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:34.486 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:34.486 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:34.486 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:34.486 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 
00:08:34.486 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:34.486 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:34.486 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:34.486 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:34.486 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:34.486 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:34.486 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:34.486 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:34.486 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:34.486 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:34.486 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:34.486 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:34.486 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:34.486 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:34.486 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:34.486 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:34.486 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:34.486 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:34.486 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:34.486 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:34.486 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:34.486 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:34.486 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:34.486 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:34.486 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:34.486 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:34.486 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:34.486 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:34.486 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:34.486 
10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:34.486 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:34.486 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:34.486 Cannot find device "nvmf_init_br" 00:08:34.486 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:08:34.486 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:34.486 Cannot find device "nvmf_init_br2" 00:08:34.487 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:08:34.487 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:34.487 Cannot find device "nvmf_tgt_br" 00:08:34.487 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 00:08:34.487 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:34.487 Cannot find device "nvmf_tgt_br2" 00:08:34.487 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 00:08:34.487 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:34.487 Cannot find device "nvmf_init_br" 00:08:34.487 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 00:08:34.487 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:34.487 Cannot find device "nvmf_init_br2" 00:08:34.487 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 00:08:34.487 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:34.487 Cannot find device "nvmf_tgt_br" 00:08:34.487 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 00:08:34.487 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:34.487 Cannot find device "nvmf_tgt_br2" 00:08:34.487 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 00:08:34.487 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:34.487 Cannot find device "nvmf_br" 00:08:34.487 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 00:08:34.487 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:34.487 Cannot find device "nvmf_init_if" 00:08:34.487 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 00:08:34.487 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:34.487 Cannot find device "nvmf_init_if2" 00:08:34.487 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 00:08:34.487 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:34.487 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:34.487 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 00:08:34.487 
10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:34.487 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:34.487 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 00:08:34.487 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:34.746 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:34.746 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:34.746 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:34.746 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:34.746 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:34.746 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:34.746 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:34.746 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:34.746 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:34.746 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:34.746 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:34.746 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:34.746 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:34.746 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:34.746 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:34.746 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:34.746 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:34.746 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:34.746 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:34.746 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:34.746 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:34.746 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:34.747 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:34.747 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:34.747 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:34.747 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:34.747 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:34.747 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:34.747 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:34.747 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:34.747 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:34.747 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:34.747 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:34.747 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms 00:08:34.747 00:08:34.747 --- 10.0.0.3 ping statistics --- 00:08:34.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:34.747 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:08:34.747 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:34.747 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:34.747 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.100 ms 00:08:34.747 00:08:34.747 --- 10.0.0.4 ping statistics --- 00:08:34.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:34.747 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:08:34.747 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:35.006 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:35.006 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:08:35.006 00:08:35.006 --- 10.0.0.1 ping statistics --- 00:08:35.006 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:35.006 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:08:35.006 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:35.006 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:35.006 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.085 ms 00:08:35.006 00:08:35.006 --- 10.0.0.2 ping statistics --- 00:08:35.006 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:35.006 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:08:35.006 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:35.007 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@461 -- # return 0 00:08:35.007 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:35.007 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:35.007 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:35.007 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:35.007 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:35.007 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:35.007 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:35.007 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:35.007 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:35.007 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:35.007 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:35.007 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=63964 00:08:35.007 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 63964 00:08:35.007 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:35.007 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 63964 ']' 00:08:35.007 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:35.007 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:35.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:35.007 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:35.007 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:35.007 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:35.007 [2024-11-15 10:52:21.717506] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
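Before the bdev_io_wait target comes up, nvmf_veth_init (traced above from nvmf/common.sh) builds an isolated TCP topology: the target runs inside the nvmf_tgt_ns_spdk network namespace, veth pairs bridge it to the host, and the initiator reaches it at 10.0.0.3:4420. A minimal sketch of that layout follows, using the device names and addresses from the trace; the helper also creates a second initiator/target pair (10.0.0.2/10.0.0.4) and tags its iptables rules with SPDK_NVMF comments, which are omitted here for brevity. Root privileges are assumed.

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# Bridge the host-side veth ends together so initiator and target can talk.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

# Allow NVMe/TCP traffic in, permit bridged forwarding, verify connectivity both ways.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

# The target then runs inside the namespace, held at --wait-for-rpc until configured:
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc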
00:08:35.007 [2024-11-15 10:52:21.717638] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:35.266 [2024-11-15 10:52:21.865670] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:35.266 [2024-11-15 10:52:21.938062] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:35.266 [2024-11-15 10:52:21.938158] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:35.266 [2024-11-15 10:52:21.938209] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:35.266 [2024-11-15 10:52:21.938218] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:35.266 [2024-11-15 10:52:21.938226] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:35.266 [2024-11-15 10:52:21.939758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:35.266 [2024-11-15 10:52:21.939899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:35.266 [2024-11-15 10:52:21.939826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:35.266 [2024-11-15 10:52:21.939903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.204 10:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:36.204 10:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:08:36.204 10:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:36.204 10:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:36.204 10:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:36.204 10:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:36.204 10:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:36.204 10:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.204 10:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:36.204 10:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.204 10:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:36.204 10:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.204 10:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:36.204 [2024-11-15 10:52:22.835331] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:36.204 10:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.204 10:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:36.204 10:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.204 10:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:36.204 [2024-11-15 10:52:22.849484] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:36.204 10:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.204 10:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:36.204 10:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.204 10:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:36.204 Malloc0 00:08:36.204 10:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.204 10:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:36.204 10:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.204 10:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:36.204 10:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.204 10:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:36.204 10:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.204 10:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:36.204 10:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.204 10:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:36.204 10:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.204 10:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:36.205 [2024-11-15 10:52:22.918511] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:36.205 10:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.205 10:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=64004 00:08:36.205 10:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:36.205 10:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:36.205 10:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:36.205 10:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:36.205 10:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:36.205 10:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:36.205 { 00:08:36.205 
"params": { 00:08:36.205 "name": "Nvme$subsystem", 00:08:36.205 "trtype": "$TEST_TRANSPORT", 00:08:36.205 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:36.205 "adrfam": "ipv4", 00:08:36.205 "trsvcid": "$NVMF_PORT", 00:08:36.205 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:36.205 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:36.205 "hdgst": ${hdgst:-false}, 00:08:36.205 "ddgst": ${ddgst:-false} 00:08:36.205 }, 00:08:36.205 "method": "bdev_nvme_attach_controller" 00:08:36.205 } 00:08:36.205 EOF 00:08:36.205 )") 00:08:36.205 10:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=64006 00:08:36.205 10:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:36.205 10:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:36.205 10:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=64008 00:08:36.205 10:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:36.205 10:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:36.205 10:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:36.205 10:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:36.205 10:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:36.205 10:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:36.205 { 00:08:36.205 "params": { 00:08:36.205 "name": "Nvme$subsystem", 00:08:36.205 "trtype": "$TEST_TRANSPORT", 00:08:36.205 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:36.205 "adrfam": "ipv4", 00:08:36.205 "trsvcid": "$NVMF_PORT", 00:08:36.205 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:36.205 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:36.205 "hdgst": ${hdgst:-false}, 00:08:36.205 "ddgst": ${ddgst:-false} 00:08:36.205 }, 00:08:36.205 "method": "bdev_nvme_attach_controller" 00:08:36.205 } 00:08:36.205 EOF 00:08:36.205 )") 00:08:36.205 10:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=64011 00:08:36.205 10:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:36.205 10:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:36.205 10:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:36.205 10:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:36.205 10:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:36.205 10:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:36.205 { 00:08:36.205 "params": { 00:08:36.205 "name": "Nvme$subsystem", 00:08:36.205 "trtype": "$TEST_TRANSPORT", 00:08:36.205 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:36.205 "adrfam": "ipv4", 00:08:36.205 "trsvcid": "$NVMF_PORT", 00:08:36.205 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:36.205 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:08:36.205 "hdgst": ${hdgst:-false}, 00:08:36.205 "ddgst": ${ddgst:-false} 00:08:36.205 }, 00:08:36.205 "method": "bdev_nvme_attach_controller" 00:08:36.205 } 00:08:36.205 EOF 00:08:36.205 )") 00:08:36.205 10:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:36.205 10:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:36.205 10:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:36.205 10:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:36.205 10:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:36.205 10:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:36.205 10:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:36.205 10:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:36.205 { 00:08:36.205 "params": { 00:08:36.205 "name": "Nvme$subsystem", 00:08:36.205 "trtype": "$TEST_TRANSPORT", 00:08:36.205 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:36.205 "adrfam": "ipv4", 00:08:36.205 "trsvcid": "$NVMF_PORT", 00:08:36.205 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:36.205 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:36.205 "hdgst": ${hdgst:-false}, 00:08:36.205 "ddgst": ${ddgst:-false} 00:08:36.205 }, 00:08:36.205 "method": "bdev_nvme_attach_controller" 00:08:36.205 } 00:08:36.205 EOF 00:08:36.205 )") 00:08:36.205 10:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:36.205 10:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:36.205 10:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:36.205 10:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:36.205 10:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:36.205 10:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:36.205 10:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:36.205 "params": { 00:08:36.205 "name": "Nvme1", 00:08:36.205 "trtype": "tcp", 00:08:36.205 "traddr": "10.0.0.3", 00:08:36.205 "adrfam": "ipv4", 00:08:36.205 "trsvcid": "4420", 00:08:36.205 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:36.205 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:36.205 "hdgst": false, 00:08:36.205 "ddgst": false 00:08:36.205 }, 00:08:36.205 "method": "bdev_nvme_attach_controller" 00:08:36.205 }' 00:08:36.205 10:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:36.205 "params": { 00:08:36.205 "name": "Nvme1", 00:08:36.205 "trtype": "tcp", 00:08:36.205 "traddr": "10.0.0.3", 00:08:36.205 "adrfam": "ipv4", 00:08:36.205 "trsvcid": "4420", 00:08:36.205 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:36.205 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:36.205 "hdgst": false, 00:08:36.205 "ddgst": false 00:08:36.205 }, 00:08:36.205 "method": "bdev_nvme_attach_controller" 00:08:36.205 }' 00:08:36.205 10:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:08:36.205 10:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:36.205 10:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:36.205 "params": { 00:08:36.205 "name": "Nvme1", 00:08:36.205 "trtype": "tcp", 00:08:36.205 "traddr": "10.0.0.3", 00:08:36.205 "adrfam": "ipv4", 00:08:36.205 "trsvcid": "4420", 00:08:36.205 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:36.205 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:36.205 "hdgst": false, 00:08:36.205 "ddgst": false 00:08:36.205 }, 00:08:36.205 "method": "bdev_nvme_attach_controller" 00:08:36.205 }' 00:08:36.205 10:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:36.205 10:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:36.205 "params": { 00:08:36.205 "name": "Nvme1", 00:08:36.205 "trtype": "tcp", 00:08:36.205 "traddr": "10.0.0.3", 00:08:36.205 "adrfam": "ipv4", 00:08:36.205 "trsvcid": "4420", 00:08:36.205 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:36.205 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:36.205 "hdgst": false, 00:08:36.205 "ddgst": false 00:08:36.205 }, 00:08:36.205 "method": "bdev_nvme_attach_controller" 00:08:36.205 }' 00:08:36.205 [2024-11-15 10:52:22.990234] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:08:36.206 [2024-11-15 10:52:22.990495] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:36.206 [2024-11-15 10:52:23.007992] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:08:36.206 [2024-11-15 10:52:23.008235] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:36.206 [2024-11-15 10:52:23.016046] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:08:36.206 [2024-11-15 10:52:23.016286] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:36.206 10:52:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 64004 00:08:36.206 [2024-11-15 10:52:23.040840] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
00:08:36.206 [2024-11-15 10:52:23.040960] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:36.465 [2024-11-15 10:52:23.225952] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.465 [2024-11-15 10:52:23.285496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:08:36.465 [2024-11-15 10:52:23.299420] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:36.465 [2024-11-15 10:52:23.300404] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.724 [2024-11-15 10:52:23.359879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:08:36.724 [2024-11-15 10:52:23.373863] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:36.724 [2024-11-15 10:52:23.384959] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.724 [2024-11-15 10:52:23.444024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:08:36.724 [2024-11-15 10:52:23.457960] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:36.724 [2024-11-15 10:52:23.471966] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.724 Running I/O for 1 seconds... 00:08:36.724 Running I/O for 1 seconds... 00:08:36.724 [2024-11-15 10:52:23.533669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:36.724 [2024-11-15 10:52:23.547751] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:36.983 Running I/O for 1 seconds... 00:08:36.983 Running I/O for 1 seconds... 
00:08:37.919 169672.00 IOPS, 662.78 MiB/s 00:08:37.919 Latency(us) 00:08:37.919 [2024-11-15T10:52:24.780Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:37.919 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:37.919 Nvme1n1 : 1.00 169337.94 661.48 0.00 0.00 752.03 348.16 1980.97 00:08:37.919 [2024-11-15T10:52:24.780Z] =================================================================================================================== 00:08:37.919 [2024-11-15T10:52:24.780Z] Total : 169337.94 661.48 0.00 0.00 752.03 348.16 1980.97 00:08:37.919 8880.00 IOPS, 34.69 MiB/s 00:08:37.919 Latency(us) 00:08:37.919 [2024-11-15T10:52:24.780Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:37.919 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:37.919 Nvme1n1 : 1.01 8920.88 34.85 0.00 0.00 14272.99 7417.48 18111.77 00:08:37.919 [2024-11-15T10:52:24.780Z] =================================================================================================================== 00:08:37.919 [2024-11-15T10:52:24.780Z] Total : 8920.88 34.85 0.00 0.00 14272.99 7417.48 18111.77 00:08:37.919 5153.00 IOPS, 20.13 MiB/s 00:08:37.919 Latency(us) 00:08:37.919 [2024-11-15T10:52:24.780Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:37.919 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:37.919 Nvme1n1 : 1.01 5223.48 20.40 0.00 0.00 24347.11 11558.17 34078.72 00:08:37.919 [2024-11-15T10:52:24.780Z] =================================================================================================================== 00:08:37.919 [2024-11-15T10:52:24.780Z] Total : 5223.48 20.40 0.00 0.00 24347.11 11558.17 34078.72 00:08:37.919 6993.00 IOPS, 27.32 MiB/s 00:08:37.919 Latency(us) 00:08:37.919 [2024-11-15T10:52:24.780Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:37.919 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:37.919 Nvme1n1 : 1.01 7061.88 27.59 0.00 0.00 18028.63 5719.51 29789.09 00:08:37.919 [2024-11-15T10:52:24.780Z] =================================================================================================================== 00:08:37.919 [2024-11-15T10:52:24.780Z] Total : 7061.88 27.59 0.00 0.00 18028.63 5719.51 29789.09 00:08:38.178 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 64006 00:08:38.178 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 64008 00:08:38.178 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 64011 00:08:38.178 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:38.178 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.178 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:38.178 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.178 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:38.178 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:38.178 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:08:38.178 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:08:38.179 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:38.179 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:08:38.179 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:38.179 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:38.179 rmmod nvme_tcp 00:08:38.179 rmmod nvme_fabrics 00:08:38.179 rmmod nvme_keyring 00:08:38.179 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:38.179 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:08:38.179 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:08:38.179 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 63964 ']' 00:08:38.179 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 63964 00:08:38.179 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 63964 ']' 00:08:38.179 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 63964 00:08:38.179 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:08:38.179 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:38.179 10:52:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63964 00:08:38.179 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:38.179 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:38.179 killing process with pid 63964 00:08:38.179 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63964' 00:08:38.179 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 63964 00:08:38.179 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 63964 00:08:38.438 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:38.438 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:38.438 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:38.438 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:08:38.438 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:08:38.438 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:38.438 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:08:38.438 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:38.438 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:38.438 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:38.438 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:38.697 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:38.697 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:38.697 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:38.697 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:38.697 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:38.697 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:38.697 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:38.697 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:38.697 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:38.697 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:38.697 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:38.697 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:38.697 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:38.697 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:38.697 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:38.697 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 00:08:38.957 00:08:38.957 real 0m4.602s 00:08:38.957 user 0m17.861s 00:08:38.957 sys 0m2.454s 00:08:38.957 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:38.957 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:38.957 ************************************ 00:08:38.957 END TEST nvmf_bdev_io_wait 00:08:38.957 ************************************ 00:08:38.957 10:52:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:38.957 10:52:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:38.957 10:52:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:38.957 10:52:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:38.957 ************************************ 00:08:38.957 START TEST nvmf_queue_depth 00:08:38.957 ************************************ 00:08:38.957 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:38.957 * Looking for test storage... 
00:08:38.957 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:38.957 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:38.957 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:08:38.957 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:38.957 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:38.957 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:38.957 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:38.957 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:38.957 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:08:38.957 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:08:38.957 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:08:38.957 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:08:38.957 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:08:38.957 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:08:38.957 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:08:38.957 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:38.957 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:08:38.957 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:08:38.957 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:38.957 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:38.957 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:08:38.957 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:08:38.957 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:38.957 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:08:38.957 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:08:38.957 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:08:38.957 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:08:38.957 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:38.958 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:08:38.958 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:08:38.958 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:38.958 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:38.958 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:08:38.958 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:38.958 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:38.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.958 --rc genhtml_branch_coverage=1 00:08:38.958 --rc genhtml_function_coverage=1 00:08:38.958 --rc genhtml_legend=1 00:08:38.958 --rc geninfo_all_blocks=1 00:08:38.958 --rc geninfo_unexecuted_blocks=1 00:08:38.958 00:08:38.958 ' 00:08:38.958 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:38.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.958 --rc genhtml_branch_coverage=1 00:08:38.958 --rc genhtml_function_coverage=1 00:08:38.958 --rc genhtml_legend=1 00:08:38.958 --rc geninfo_all_blocks=1 00:08:38.958 --rc geninfo_unexecuted_blocks=1 00:08:38.958 00:08:38.958 ' 00:08:38.958 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:38.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.958 --rc genhtml_branch_coverage=1 00:08:38.958 --rc genhtml_function_coverage=1 00:08:38.958 --rc genhtml_legend=1 00:08:38.958 --rc geninfo_all_blocks=1 00:08:38.958 --rc geninfo_unexecuted_blocks=1 00:08:38.958 00:08:38.958 ' 00:08:38.958 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:38.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.958 --rc genhtml_branch_coverage=1 00:08:38.958 --rc genhtml_function_coverage=1 00:08:38.958 --rc genhtml_legend=1 00:08:38.958 --rc geninfo_all_blocks=1 00:08:38.958 --rc geninfo_unexecuted_blocks=1 00:08:38.958 00:08:38.958 ' 00:08:38.958 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:38.958 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 
-- # uname -s 00:08:38.958 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:38.958 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:38.958 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:38.958 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:38.958 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:38.958 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:38.958 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:38.958 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:38.958 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:38.958 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:39.218 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:08:39.218 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:08:39.218 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:39.218 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:39.218 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:39.218 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:39.218 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:39.218 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:08:39.218 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:39.218 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:39.218 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:39.218 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.218 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.218 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.218 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:39.218 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.218 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:08:39.218 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:39.218 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:39.218 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:39.218 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:39.218 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:39.218 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:39.218 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:39.218 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:39.218 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:39.218 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:39.218 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:39.218 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:08:39.218 
10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:39.218 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:39.218 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:39.218 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:39.218 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:39.218 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:39.218 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:39.218 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:39.218 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:39.218 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:39.218 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:39.218 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:39.218 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:39.219 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:39.219 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:39.219 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:39.219 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:39.219 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:39.219 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:39.219 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:39.219 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:39.219 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:39.219 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:39.219 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:39.219 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:39.219 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:39.219 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:39.219 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:39.219 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:39.219 10:52:25 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:39.219 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:39.219 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:39.219 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:39.219 Cannot find device "nvmf_init_br" 00:08:39.219 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:08:39.219 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:39.219 Cannot find device "nvmf_init_br2" 00:08:39.219 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:08:39.219 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:39.219 Cannot find device "nvmf_tgt_br" 00:08:39.219 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 00:08:39.219 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:39.219 Cannot find device "nvmf_tgt_br2" 00:08:39.219 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 00:08:39.219 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:39.219 Cannot find device "nvmf_init_br" 00:08:39.219 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 00:08:39.219 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:39.219 Cannot find device "nvmf_init_br2" 00:08:39.219 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 00:08:39.219 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:39.219 Cannot find device "nvmf_tgt_br" 00:08:39.219 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 00:08:39.219 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:39.219 Cannot find device "nvmf_tgt_br2" 00:08:39.219 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 00:08:39.219 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:39.219 Cannot find device "nvmf_br" 00:08:39.219 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 00:08:39.219 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:39.219 Cannot find device "nvmf_init_if" 00:08:39.219 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 00:08:39.219 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:39.219 Cannot find device "nvmf_init_if2" 00:08:39.219 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 00:08:39.219 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:39.219 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:39.219 10:52:25 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 00:08:39.219 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:39.219 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:39.219 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 00:08:39.219 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:39.219 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:39.219 10:52:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:39.219 10:52:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:39.219 10:52:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:39.219 10:52:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:39.219 10:52:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:39.219 10:52:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:39.479 10:52:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:39.479 10:52:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:39.479 10:52:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:39.479 10:52:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:39.479 10:52:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:39.479 10:52:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:39.479 10:52:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:39.479 10:52:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:39.479 10:52:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:39.479 10:52:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:39.479 10:52:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:39.479 10:52:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:39.479 10:52:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:39.479 10:52:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:39.479 10:52:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:39.479 
10:52:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:39.479 10:52:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:39.479 10:52:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:39.479 10:52:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:39.479 10:52:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:39.479 10:52:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:39.479 10:52:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:39.479 10:52:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:39.479 10:52:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:39.479 10:52:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:39.479 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:39.479 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.084 ms 00:08:39.479 00:08:39.479 --- 10.0.0.3 ping statistics --- 00:08:39.479 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:39.479 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:08:39.479 10:52:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:39.479 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:39.479 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.104 ms 00:08:39.479 00:08:39.479 --- 10.0.0.4 ping statistics --- 00:08:39.479 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:39.479 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:08:39.479 10:52:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:39.479 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:39.479 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:08:39.479 00:08:39.479 --- 10.0.0.1 ping statistics --- 00:08:39.479 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:39.479 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:08:39.479 10:52:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:39.479 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:39.479 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:08:39.479 00:08:39.479 --- 10.0.0.2 ping statistics --- 00:08:39.479 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:39.479 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:08:39.479 10:52:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:39.479 10:52:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@461 -- # return 0 00:08:39.479 10:52:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:39.479 10:52:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:39.479 10:52:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:39.479 10:52:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:39.479 10:52:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:39.479 10:52:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:39.479 10:52:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:39.479 10:52:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:39.479 10:52:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:39.479 10:52:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:39.479 10:52:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:39.479 10:52:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=64299 00:08:39.479 10:52:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:39.479 10:52:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 64299 00:08:39.479 10:52:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 64299 ']' 00:08:39.479 10:52:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:39.479 10:52:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:39.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:39.479 10:52:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:39.479 10:52:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:39.479 10:52:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:39.739 [2024-11-15 10:52:26.346809] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
00:08:39.739 [2024-11-15 10:52:26.346908] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:39.739 [2024-11-15 10:52:26.506203] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.739 [2024-11-15 10:52:26.566038] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:39.739 [2024-11-15 10:52:26.566107] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:39.739 [2024-11-15 10:52:26.566121] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:39.739 [2024-11-15 10:52:26.566132] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:39.739 [2024-11-15 10:52:26.566142] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:39.739 [2024-11-15 10:52:26.566657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:40.031 [2024-11-15 10:52:26.623736] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:40.617 10:52:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:40.617 10:52:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:08:40.617 10:52:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:40.617 10:52:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:40.617 10:52:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:40.617 10:52:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:40.617 10:52:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:40.617 10:52:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.617 10:52:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:40.617 [2024-11-15 10:52:27.396400] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:40.617 10:52:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.617 10:52:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:40.617 10:52:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.617 10:52:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:40.617 Malloc0 00:08:40.617 10:52:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.617 10:52:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:40.617 10:52:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.617 10:52:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # 
set +x 00:08:40.618 10:52:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.618 10:52:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:40.618 10:52:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.618 10:52:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:40.618 10:52:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.618 10:52:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:40.618 10:52:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.618 10:52:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:40.618 [2024-11-15 10:52:27.443186] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:40.618 10:52:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.618 10:52:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=64331 00:08:40.618 10:52:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:40.618 10:52:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:40.618 10:52:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 64331 /var/tmp/bdevperf.sock 00:08:40.618 10:52:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 64331 ']' 00:08:40.618 10:52:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:40.618 10:52:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:40.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:40.618 10:52:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:40.618 10:52:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:40.618 10:52:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:40.877 [2024-11-15 10:52:27.505983] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
00:08:40.877 [2024-11-15 10:52:27.506069] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64331 ] 00:08:40.877 [2024-11-15 10:52:27.653833] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.877 [2024-11-15 10:52:27.709863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.135 [2024-11-15 10:52:27.779331] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:41.135 10:52:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:41.135 10:52:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:08:41.135 10:52:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:41.135 10:52:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.135 10:52:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:41.135 NVMe0n1 00:08:41.135 10:52:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.135 10:52:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:41.395 Running I/O for 10 seconds... 00:08:43.268 7550.00 IOPS, 29.49 MiB/s [2024-11-15T10:52:31.066Z] 8209.00 IOPS, 32.07 MiB/s [2024-11-15T10:52:32.445Z] 8578.33 IOPS, 33.51 MiB/s [2024-11-15T10:52:33.383Z] 8750.25 IOPS, 34.18 MiB/s [2024-11-15T10:52:34.320Z] 8846.60 IOPS, 34.56 MiB/s [2024-11-15T10:52:35.257Z] 8906.83 IOPS, 34.79 MiB/s [2024-11-15T10:52:36.194Z] 8969.43 IOPS, 35.04 MiB/s [2024-11-15T10:52:37.130Z] 9060.50 IOPS, 35.39 MiB/s [2024-11-15T10:52:38.067Z] 9105.11 IOPS, 35.57 MiB/s [2024-11-15T10:52:38.326Z] 9132.60 IOPS, 35.67 MiB/s 00:08:51.465 Latency(us) 00:08:51.465 [2024-11-15T10:52:38.326Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:51.465 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:08:51.465 Verification LBA range: start 0x0 length 0x4000 00:08:51.465 NVMe0n1 : 10.08 9165.45 35.80 0.00 0.00 111278.70 24069.59 83409.45 00:08:51.465 [2024-11-15T10:52:38.326Z] =================================================================================================================== 00:08:51.465 [2024-11-15T10:52:38.326Z] Total : 9165.45 35.80 0.00 0.00 111278.70 24069.59 83409.45 00:08:51.465 { 00:08:51.465 "results": [ 00:08:51.465 { 00:08:51.465 "job": "NVMe0n1", 00:08:51.465 "core_mask": "0x1", 00:08:51.465 "workload": "verify", 00:08:51.465 "status": "finished", 00:08:51.465 "verify_range": { 00:08:51.465 "start": 0, 00:08:51.465 "length": 16384 00:08:51.465 }, 00:08:51.465 "queue_depth": 1024, 00:08:51.465 "io_size": 4096, 00:08:51.465 "runtime": 10.075879, 00:08:51.465 "iops": 9165.453455723316, 00:08:51.465 "mibps": 35.802552561419205, 00:08:51.465 "io_failed": 0, 00:08:51.465 "io_timeout": 0, 00:08:51.465 "avg_latency_us": 111278.7022524979, 00:08:51.465 "min_latency_us": 24069.585454545453, 00:08:51.465 "max_latency_us": 83409.45454545454 00:08:51.465 
} 00:08:51.465 ], 00:08:51.465 "core_count": 1 00:08:51.465 } 00:08:51.465 10:52:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 64331 00:08:51.465 10:52:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 64331 ']' 00:08:51.465 10:52:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 64331 00:08:51.465 10:52:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:08:51.465 10:52:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:51.465 10:52:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64331 00:08:51.465 10:52:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:51.465 10:52:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:51.465 10:52:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64331' 00:08:51.465 killing process with pid 64331 00:08:51.465 10:52:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 64331 00:08:51.465 Received shutdown signal, test time was about 10.000000 seconds 00:08:51.465 00:08:51.465 Latency(us) 00:08:51.465 [2024-11-15T10:52:38.326Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:51.465 [2024-11-15T10:52:38.326Z] =================================================================================================================== 00:08:51.465 [2024-11-15T10:52:38.326Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:51.465 10:52:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 64331 00:08:51.725 10:52:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:51.725 10:52:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:08:51.725 10:52:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:51.725 10:52:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:08:51.725 10:52:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:51.725 10:52:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:08:51.725 10:52:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:51.725 10:52:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:51.725 rmmod nvme_tcp 00:08:51.725 rmmod nvme_fabrics 00:08:51.725 rmmod nvme_keyring 00:08:51.725 10:52:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:51.725 10:52:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:08:51.725 10:52:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:08:51.725 10:52:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 64299 ']' 00:08:51.725 10:52:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 64299 00:08:51.725 10:52:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 64299 ']' 00:08:51.725 
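The teardown above runs autotest_common.sh's killprocess helper twice, once for the bdevperf process (pid 64331) and once for the nvmf target (pid 64299). The xtrace shows the same pattern both times: check the pid is set, probe it with kill -0, look up the command name, then kill and reap it. A minimal, illustrative reconstruction of that pattern is below; the function body is an assumption based only on the xtrace lines visible here, not the actual script.

#!/usr/bin/env bash
# Illustrative sketch of the killprocess pattern traced above (not the real helper).
killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1               # '[' -z <pid> ']'
    kill -0 "$pid" 2>/dev/null || return 0  # nothing to do if the process is already gone
    local process_name
    if [ "$(uname)" = Linux ]; then
        # the real helper inspects this name (e.g. reactor_0) to decide whether sudo is needed
        process_name=$(ps --no-headers -o comm= "$pid")
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true         # reap the child so its exit status is collected
}

killprocess 64331   # bdevperf
killprocess 64299   # nvmf target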
10:52:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 64299 00:08:51.725 10:52:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:08:51.725 10:52:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:51.725 10:52:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64299 00:08:51.725 10:52:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:51.725 10:52:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:51.725 10:52:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64299' 00:08:51.725 killing process with pid 64299 00:08:51.725 10:52:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 64299 00:08:51.725 10:52:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 64299 00:08:51.984 10:52:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:51.984 10:52:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:51.984 10:52:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:51.984 10:52:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:08:51.984 10:52:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:08:51.984 10:52:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:51.984 10:52:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:08:51.984 10:52:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:51.984 10:52:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:51.984 10:52:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:51.984 10:52:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:51.984 10:52:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:51.984 10:52:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:52.244 10:52:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:52.244 10:52:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:52.244 10:52:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:52.244 10:52:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:52.244 10:52:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:52.244 10:52:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:52.244 10:52:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:52.244 10:52:38 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:52.244 10:52:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:52.244 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:52.244 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:52.244 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:52.244 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:52.244 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 00:08:52.244 00:08:52.244 real 0m13.432s 00:08:52.244 user 0m22.081s 00:08:52.244 sys 0m2.612s 00:08:52.244 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:52.244 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:52.244 ************************************ 00:08:52.244 END TEST nvmf_queue_depth 00:08:52.244 ************************************ 00:08:52.244 10:52:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:52.244 10:52:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:52.244 10:52:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:52.244 10:52:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:52.244 ************************************ 00:08:52.244 START TEST nvmf_target_multipath 00:08:52.244 ************************************ 00:08:52.244 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:52.508 * Looking for test storage... 
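At this point the queue-depth suite ends and the multipath suite starts, both driven by the run_test wrapper: it prints the START TEST / END TEST banners seen here, times the test script (the real/user/sys summary just above), and passes the script's exit status back to the caller. A rough sketch of that wrapper, assuming nothing beyond what the banners and the xtrace show:

# Rough sketch of the run_test wrapper behaviour visible in this log.
run_test() {
    local test_name=$1; shift
    [ "$#" -ge 1 ] || return 1              # needs a command to run ('[' 3 -le 1 ']' check above)
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"                               # produces the real/user/sys summary
    local rc=$?
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
    return "$rc"
}

run_test nvmf_target_multipath \
    /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp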
00:08:52.508 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:52.508 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:52.508 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:08:52.508 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:52.508 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:52.508 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:52.508 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:52.508 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:52.508 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:08:52.508 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:08:52.508 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:08:52.508 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:08:52.508 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:08:52.508 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:08:52.508 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:08:52.508 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:52.508 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:08:52.508 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:08:52.508 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:52.508 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:52.508 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:08:52.508 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:08:52.508 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:52.508 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:08:52.508 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:08:52.508 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:08:52.508 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:08:52.508 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:52.508 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:08:52.508 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:08:52.508 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:52.508 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:52.508 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:08:52.508 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:52.508 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:52.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.508 --rc genhtml_branch_coverage=1 00:08:52.508 --rc genhtml_function_coverage=1 00:08:52.508 --rc genhtml_legend=1 00:08:52.508 --rc geninfo_all_blocks=1 00:08:52.508 --rc geninfo_unexecuted_blocks=1 00:08:52.508 00:08:52.508 ' 00:08:52.508 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:52.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.508 --rc genhtml_branch_coverage=1 00:08:52.508 --rc genhtml_function_coverage=1 00:08:52.508 --rc genhtml_legend=1 00:08:52.508 --rc geninfo_all_blocks=1 00:08:52.508 --rc geninfo_unexecuted_blocks=1 00:08:52.508 00:08:52.508 ' 00:08:52.508 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:52.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.508 --rc genhtml_branch_coverage=1 00:08:52.508 --rc genhtml_function_coverage=1 00:08:52.508 --rc genhtml_legend=1 00:08:52.508 --rc geninfo_all_blocks=1 00:08:52.508 --rc geninfo_unexecuted_blocks=1 00:08:52.508 00:08:52.508 ' 00:08:52.508 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:52.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.508 --rc genhtml_branch_coverage=1 00:08:52.508 --rc genhtml_function_coverage=1 00:08:52.508 --rc genhtml_legend=1 00:08:52.508 --rc geninfo_all_blocks=1 00:08:52.508 --rc geninfo_unexecuted_blocks=1 00:08:52.508 00:08:52.508 ' 00:08:52.508 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:52.508 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:08:52.508 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:52.508 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:52.508 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:52.508 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:52.508 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:52.508 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:52.508 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:52.508 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:52.508 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:52.508 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:52.508 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:08:52.509 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:08:52.509 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:52.509 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:52.509 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:52.509 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:52.509 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:52.509 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:08:52.509 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:52.509 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:52.509 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:52.509 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.509 
10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.509 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.509 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:52.509 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.509 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:08:52.509 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:52.509 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:52.509 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:52.509 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:52.509 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:52.509 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:52.509 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:52.509 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:52.509 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:52.509 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:52.509 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:08:52.509 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:52.509 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:52.509 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:52.509 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:52.509 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:52.509 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:52.509 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:52.509 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:52.509 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:52.509 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:52.509 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:52.509 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:52.509 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:52.509 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:52.509 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:52.509 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:52.509 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:52.509 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:52.509 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:52.509 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:52.509 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:52.509 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:52.509 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:52.509 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:52.509 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:52.509 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:52.509 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:52.509 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:52.509 10:52:39 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:52.509 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:52.509 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:52.509 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:52.509 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:52.509 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:52.509 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:52.509 Cannot find device "nvmf_init_br" 00:08:52.509 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:08:52.509 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:52.509 Cannot find device "nvmf_init_br2" 00:08:52.509 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:08:52.509 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:52.509 Cannot find device "nvmf_tgt_br" 00:08:52.509 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 00:08:52.509 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:52.768 Cannot find device "nvmf_tgt_br2" 00:08:52.768 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 00:08:52.768 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:52.768 Cannot find device "nvmf_init_br" 00:08:52.768 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 00:08:52.768 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:52.768 Cannot find device "nvmf_init_br2" 00:08:52.768 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 00:08:52.768 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:52.768 Cannot find device "nvmf_tgt_br" 00:08:52.768 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 00:08:52.768 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:52.768 Cannot find device "nvmf_tgt_br2" 00:08:52.768 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 00:08:52.768 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:52.768 Cannot find device "nvmf_br" 00:08:52.768 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 00:08:52.768 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:52.768 Cannot find device "nvmf_init_if" 00:08:52.768 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@171 -- # true 00:08:52.768 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:52.768 Cannot find device "nvmf_init_if2" 00:08:52.768 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 00:08:52.768 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:52.768 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:52.768 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 00:08:52.769 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:52.769 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:52.769 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 00:08:52.769 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:52.769 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:52.769 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:52.769 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:52.769 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:52.769 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:52.769 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:52.769 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:52.769 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:52.769 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:52.769 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:52.769 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:52.769 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:52.769 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:52.769 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:52.769 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:52.769 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:52.769 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 
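The nvmf_veth_init sequence traced above builds the virtual test topology: a network namespace for the target, four veth pairs (two initiator-side, two target-side), the 10.0.0.1-10.0.0.4/24 addresses, and all links brought up; the bridge that ties the peer ends together follows in the next few steps of the log. A condensed sketch of that setup, using only commands that appear in the xtrace:

# Condensed sketch of the topology nvmf_veth_init creates (commands as in the xtrace above).
NS=nvmf_tgt_ns_spdk
ip netns add $NS

# Veth pairs: the *_if ends carry traffic, the *_br ends get enslaved to the bridge later.
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

# Target-side interfaces live inside the namespace where nvmf_tgt will run.
ip link set nvmf_tgt_if  netns $NS
ip link set nvmf_tgt_if2 netns $NS

ip addr add 10.0.0.1/24 dev nvmf_init_if                     # initiator path 1
ip addr add 10.0.0.2/24 dev nvmf_init_if2                    # initiator path 2
ip netns exec $NS ip addr add 10.0.0.3/24 dev nvmf_tgt_if    # target listener 1
ip netns exec $NS ip addr add 10.0.0.4/24 dev nvmf_tgt_if2   # target listener 2

ip link set nvmf_init_if up
ip link set nvmf_init_if2 up
ip netns exec $NS ip link set nvmf_tgt_if up
ip netns exec $NS ip link set nvmf_tgt_if2 up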
00:08:52.769 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:52.769 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:52.769 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:52.769 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:52.769 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:52.769 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:53.028 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:53.028 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:53.028 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:53.028 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:53.028 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:53.028 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:53.028 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:53.028 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:53.028 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:53.028 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:53.028 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.112 ms 00:08:53.028 00:08:53.028 --- 10.0.0.3 ping statistics --- 00:08:53.028 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:53.028 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:08:53.028 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:53.028 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:53.028 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.057 ms 00:08:53.028 00:08:53.028 --- 10.0.0.4 ping statistics --- 00:08:53.028 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:53.028 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:08:53.028 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:53.028 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:53.028 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 00:08:53.028 00:08:53.028 --- 10.0.0.1 ping statistics --- 00:08:53.028 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:53.028 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:08:53.028 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:53.028 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:53.028 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:08:53.028 00:08:53.028 --- 10.0.0.2 ping statistics --- 00:08:53.028 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:53.028 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:08:53.028 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:53.028 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@461 -- # return 0 00:08:53.028 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:53.028 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:53.028 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:53.028 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:53.028 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:53.028 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:53.028 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:53.028 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 00:08:53.028 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:08:53.028 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:08:53.028 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:53.028 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:53.028 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:53.028 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@509 -- # nvmfpid=64695 00:08:53.028 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:53.028 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@510 -- # waitforlisten 64695 00:08:53.028 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@835 -- # '[' -z 64695 ']' 00:08:53.028 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:53.028 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:53.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
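With connectivity verified by the pings above, nvmfappstart launches nvmf_tgt inside the namespace and waitforlisten blocks until the RPC socket answers (the "Waiting for process to start up..." message). A hedged sketch of that start-and-wait step; the polling loop below is illustrative, the real waitforlisten helper is more involved:

# Illustrative start-and-wait, mirroring nvmfappstart + waitforlisten in the log above.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

echo "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..."
for _ in $(seq 1 100); do
    # Usable once the RPC socket exists and answers a trivial call.
    if [ -S /var/tmp/spdk.sock ] && \
       /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; then
        break
    fi
    sleep 0.5
done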
00:08:53.028 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:53.028 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:53.028 10:52:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:53.028 [2024-11-15 10:52:39.807360] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:08:53.028 [2024-11-15 10:52:39.807453] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:53.287 [2024-11-15 10:52:39.961249] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:53.287 [2024-11-15 10:52:40.042822] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:53.287 [2024-11-15 10:52:40.042893] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:53.287 [2024-11-15 10:52:40.042908] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:53.287 [2024-11-15 10:52:40.042919] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:53.287 [2024-11-15 10:52:40.042929] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:53.287 [2024-11-15 10:52:40.044553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:53.287 [2024-11-15 10:52:40.044707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:53.287 [2024-11-15 10:52:40.044850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:53.287 [2024-11-15 10:52:40.044860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.287 [2024-11-15 10:52:40.121833] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:54.224 10:52:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:54.224 10:52:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@868 -- # return 0 00:08:54.224 10:52:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:54.224 10:52:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:54.224 10:52:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:54.224 10:52:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:54.224 10:52:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:54.483 [2024-11-15 10:52:41.197249] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:54.483 10:52:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:08:54.742 Malloc0 00:08:54.742 10:52:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:08:55.000 10:52:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:55.260 10:52:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:55.533 [2024-11-15 10:52:42.276407] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:55.533 10:52:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:08:55.842 [2024-11-15 10:52:42.572730] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:08:55.842 10:52:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --hostid=02f14d39-9b07-4abc-bc4a-e88d43a336ca -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:08:56.101 10:52:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --hostid=02f14d39-9b07-4abc-bc4a-e88d43a336ca -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G 00:08:56.101 10:52:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:08:56.101 10:52:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1202 -- # local i=0 00:08:56.101 10:52:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:08:56.101 10:52:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:08:56.101 10:52:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # sleep 2 00:08:58.634 10:52:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:08:58.634 10:52:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:08:58.634 10:52:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:08:58.634 10:52:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:08:58.634 10:52:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:08:58.634 10:52:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # return 0 00:08:58.634 10:52:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:08:58.634 10:52:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:08:58.634 10:52:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in 
/sys/class/nvme-subsystem/* 00:08:58.634 10:52:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:58.634 10:52:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:08:58.634 10:52:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:08:58.634 10:52:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:08:58.634 10:52:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:08:58.634 10:52:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:08:58.634 10:52:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:08:58.634 10:52:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:08:58.634 10:52:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:08:58.634 10:52:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:08:58.634 10:52:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:08:58.634 10:52:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:08:58.634 10:52:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:58.634 10:52:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:58.634 10:52:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:58.634 10:52:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:58.634 10:52:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:08:58.634 10:52:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:08:58.634 10:52:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:58.634 10:52:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:58.634 10:52:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:08:58.634 10:52:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:58.634 10:52:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:08:58.634 10:52:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=64790 00:08:58.634 10:52:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:08:58.634 10:52:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:08:58.634 [global] 00:08:58.634 thread=1 00:08:58.634 invalidate=1 00:08:58.634 rw=randrw 00:08:58.634 time_based=1 00:08:58.634 runtime=6 00:08:58.634 ioengine=libaio 00:08:58.634 direct=1 00:08:58.634 bs=4096 00:08:58.634 iodepth=128 00:08:58.634 norandommap=0 00:08:58.634 numjobs=1 00:08:58.634 00:08:58.634 verify_dump=1 00:08:58.634 verify_backlog=512 00:08:58.634 verify_state_save=0 00:08:58.634 do_verify=1 00:08:58.634 verify=crc32c-intel 00:08:58.634 [job0] 00:08:58.634 filename=/dev/nvme0n1 00:08:58.634 Could not set queue depth (nvme0n1) 00:08:58.634 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:58.634 fio-3.35 00:08:58.634 Starting 1 thread 00:08:59.200 10:52:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:08:59.459 10:52:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:08:59.716 10:52:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:08:59.716 10:52:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:08:59.716 10:52:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:59.716 10:52:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:59.716 10:52:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:59.716 10:52:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:59.716 10:52:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:08:59.716 10:52:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:08:59.716 10:52:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:59.716 10:52:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:59.716 10:52:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:08:59.716 10:52:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:59.716 10:52:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:08:59.974 10:52:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:09:00.232 10:52:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:09:00.232 10:52:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:09:00.232 10:52:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:00.232 10:52:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:00.232 10:52:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:00.232 10:52:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:00.232 10:52:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:09:00.232 10:52:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:09:00.232 10:52:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:00.232 10:52:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:00.232 10:52:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:09:00.232 10:52:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:00.232 10:52:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 64790 00:09:04.418 00:09:04.418 job0: (groupid=0, jobs=1): err= 0: pid=64811: Fri Nov 15 10:52:51 2024 00:09:04.418 read: IOPS=8746, BW=34.2MiB/s (35.8MB/s)(205MiB/6009msec) 00:09:04.418 slat (usec): min=3, max=7132, avg=69.35, stdev=275.56 00:09:04.418 clat (usec): min=2318, max=20430, avg=10043.33, stdev=1860.66 00:09:04.418 lat (usec): min=2338, max=20443, avg=10112.68, stdev=1867.35 00:09:04.418 clat percentiles (usec): 00:09:04.418 | 1.00th=[ 5145], 5.00th=[ 7373], 10.00th=[ 8160], 20.00th=[ 8848], 00:09:04.418 | 30.00th=[ 9372], 40.00th=[ 9634], 50.00th=[ 9896], 60.00th=[10159], 00:09:04.418 | 70.00th=[10552], 80.00th=[10945], 90.00th=[11731], 95.00th=[13829], 00:09:04.418 | 99.00th=[16188], 99.50th=[16712], 99.90th=[18220], 99.95th=[19268], 00:09:04.418 | 99.99th=[20055] 00:09:04.418 bw ( KiB/s): min=10368, max=24440, per=50.77%, avg=17762.67, stdev=4933.72, samples=12 00:09:04.418 iops : min= 2592, max= 6110, avg=4440.67, stdev=1233.43, samples=12 00:09:04.418 write: IOPS=5169, BW=20.2MiB/s (21.2MB/s)(105MiB/5177msec); 0 zone resets 00:09:04.418 slat (usec): min=14, max=2408, avg=76.84, stdev=200.17 00:09:04.418 clat (usec): min=2239, max=21404, avg=8759.90, stdev=1669.58 00:09:04.418 lat (usec): min=2270, max=21426, avg=8836.74, stdev=1675.62 00:09:04.418 clat percentiles (usec): 00:09:04.418 | 1.00th=[ 3916], 5.00th=[ 5211], 10.00th=[ 6915], 20.00th=[ 7832], 00:09:04.418 | 30.00th=[ 8291], 40.00th=[ 8586], 50.00th=[ 8979], 60.00th=[ 9241], 00:09:04.418 | 70.00th=[ 9503], 80.00th=[ 9896], 90.00th=[10290], 95.00th=[10814], 00:09:04.418 | 99.00th=[13829], 99.50th=[14877], 99.90th=[16909], 99.95th=[17433], 00:09:04.418 | 99.99th=[18744] 00:09:04.418 bw ( KiB/s): min=10808, max=24368, per=86.05%, avg=17792.08, stdev=4707.72, samples=12 00:09:04.418 iops : min= 2702, max= 6092, avg=4448.00, stdev=1176.92, samples=12 00:09:04.418 lat (msec) : 4=0.48%, 10=62.51%, 20=37.00%, 50=0.01% 00:09:04.418 cpu : usr=4.76%, sys=19.51%, ctx=4566, majf=0, minf=90 00:09:04.418 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:09:04.418 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:04.418 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:04.418 issued rwts: total=52555,26761,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:04.418 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:04.418 00:09:04.418 Run status group 0 (all jobs): 00:09:04.418 READ: bw=34.2MiB/s (35.8MB/s), 34.2MiB/s-34.2MiB/s (35.8MB/s-35.8MB/s), io=205MiB (215MB), run=6009-6009msec 00:09:04.418 WRITE: bw=20.2MiB/s (21.2MB/s), 20.2MiB/s-20.2MiB/s (21.2MB/s-21.2MB/s), io=105MiB (110MB), run=5177-5177msec 00:09:04.418 00:09:04.418 Disk stats (read/write): 00:09:04.418 nvme0n1: ios=52104/26037, merge=0/0, ticks=503496/214641, in_queue=718137, util=98.72% 00:09:04.418 10:52:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:09:04.984 10:52:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:09:05.243 10:52:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:09:05.243 10:52:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:09:05.243 10:52:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:05.243 10:52:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:05.243 10:52:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:05.243 10:52:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:05.243 10:52:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:09:05.243 10:52:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:09:05.243 10:52:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:05.243 10:52:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:05.243 10:52:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:05.243 10:52:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:05.243 10:52:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:09:05.243 10:52:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=64897 00:09:05.243 10:52:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:09:05.243 10:52:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:09:05.243 [global] 00:09:05.243 thread=1 00:09:05.243 invalidate=1 00:09:05.243 rw=randrw 00:09:05.243 time_based=1 00:09:05.243 runtime=6 00:09:05.243 ioengine=libaio 00:09:05.243 direct=1 00:09:05.243 bs=4096 00:09:05.243 iodepth=128 00:09:05.243 norandommap=0 00:09:05.243 numjobs=1 00:09:05.243 00:09:05.243 verify_dump=1 00:09:05.243 verify_backlog=512 00:09:05.243 verify_state_save=0 00:09:05.243 do_verify=1 00:09:05.243 verify=crc32c-intel 00:09:05.243 [job0] 00:09:05.243 filename=/dev/nvme0n1 00:09:05.243 Could not set queue depth (nvme0n1) 00:09:05.243 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:05.243 fio-3.35 00:09:05.243 Starting 1 thread 00:09:06.177 10:52:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:09:06.436 10:52:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:09:07.002 
10:52:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:09:07.002 10:52:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:09:07.002 10:52:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:07.002 10:52:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:07.002 10:52:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:07.002 10:52:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:07.002 10:52:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:09:07.002 10:52:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:09:07.002 10:52:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:07.002 10:52:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:07.002 10:52:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:07.002 10:52:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:07.002 10:52:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:09:07.260 10:52:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:09:07.518 10:52:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:09:07.518 10:52:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:09:07.518 10:52:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:07.518 10:52:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:07.518 10:52:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:09:07.518 10:52:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:07.518 10:52:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:09:07.518 10:52:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:09:07.518 10:52:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:07.518 10:52:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:07.518 10:52:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:07.518 10:52:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:07.518 10:52:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 64897 00:09:11.706 00:09:11.706 job0: (groupid=0, jobs=1): err= 0: pid=64918: Fri Nov 15 10:52:58 2024 00:09:11.706 read: IOPS=9780, BW=38.2MiB/s (40.1MB/s)(230MiB/6007msec) 00:09:11.706 slat (usec): min=4, max=8222, avg=51.37, stdev=225.71 00:09:11.706 clat (usec): min=362, max=23883, avg=9005.81, stdev=3033.98 00:09:11.706 lat (usec): min=383, max=23900, avg=9057.18, stdev=3039.25 00:09:11.706 clat percentiles (usec): 00:09:11.706 | 1.00th=[ 1680], 5.00th=[ 3228], 10.00th=[ 4883], 20.00th=[ 7504], 00:09:11.706 | 30.00th=[ 8029], 40.00th=[ 8455], 50.00th=[ 8848], 60.00th=[ 9372], 00:09:11.706 | 70.00th=[10028], 80.00th=[10945], 90.00th=[12649], 95.00th=[14222], 00:09:11.706 | 99.00th=[17695], 99.50th=[19006], 99.90th=[21365], 99.95th=[21890], 00:09:11.706 | 99.99th=[22676] 00:09:11.706 bw ( KiB/s): min= 5944, max=26155, per=50.97%, avg=19940.00, stdev=6751.86, samples=11 00:09:11.706 iops : min= 1486, max= 6538, avg=4984.91, stdev=1687.88, samples=11 00:09:11.706 write: IOPS=5788, BW=22.6MiB/s (23.7MB/s)(121MiB/5338msec); 0 zone resets 00:09:11.706 slat (usec): min=14, max=2094, avg=58.67, stdev=154.75 00:09:11.706 clat (usec): min=664, max=21331, avg=7562.71, stdev=2599.36 00:09:11.706 lat (usec): min=691, max=21354, avg=7621.38, stdev=2606.92 00:09:11.706 clat percentiles (usec): 00:09:11.706 | 1.00th=[ 1680], 5.00th=[ 2606], 10.00th=[ 3490], 20.00th=[ 5800], 00:09:11.706 | 30.00th=[ 7046], 40.00th=[ 7504], 50.00th=[ 7832], 60.00th=[ 8160], 00:09:11.706 | 70.00th=[ 8586], 80.00th=[ 9241], 90.00th=[10421], 95.00th=[11469], 00:09:11.706 | 99.00th=[14353], 99.50th=[15664], 99.90th=[18482], 99.95th=[19792], 00:09:11.706 | 99.99th=[21103] 00:09:11.706 bw ( KiB/s): min= 6168, max=27321, per=86.35%, avg=19992.27, stdev=6731.10, samples=11 00:09:11.706 iops : min= 1542, max= 6830, avg=4998.00, stdev=1682.73, samples=11 00:09:11.706 lat (usec) : 500=0.03%, 750=0.13%, 1000=0.12% 00:09:11.706 lat (msec) : 2=1.54%, 4=7.50%, 10=66.68%, 20=23.83%, 50=0.17% 00:09:11.706 cpu : usr=5.63%, sys=20.08%, ctx=5525, majf=0, minf=78 00:09:11.706 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:09:11.706 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:11.706 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:11.706 issued rwts: total=58753,30897,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:11.706 latency : 
target=0, window=0, percentile=100.00%, depth=128 00:09:11.706 00:09:11.706 Run status group 0 (all jobs): 00:09:11.706 READ: bw=38.2MiB/s (40.1MB/s), 38.2MiB/s-38.2MiB/s (40.1MB/s-40.1MB/s), io=230MiB (241MB), run=6007-6007msec 00:09:11.706 WRITE: bw=22.6MiB/s (23.7MB/s), 22.6MiB/s-22.6MiB/s (23.7MB/s-23.7MB/s), io=121MiB (127MB), run=5338-5338msec 00:09:11.706 00:09:11.706 Disk stats (read/write): 00:09:11.706 nvme0n1: ios=58134/30019, merge=0/0, ticks=504262/213994, in_queue=718256, util=98.68% 00:09:11.706 10:52:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:11.706 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:11.706 10:52:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:11.706 10:52:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1223 -- # local i=0 00:09:11.706 10:52:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:11.706 10:52:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:11.706 10:52:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:11.706 10:52:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:11.706 10:52:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1235 -- # return 0 00:09:11.706 10:52:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:11.965 10:52:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:09:11.965 10:52:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:09:11.965 10:52:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:09:11.965 10:52:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:09:11.965 10:52:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:11.965 10:52:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:11.965 10:52:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:11.965 10:52:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:11.965 10:52:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:11.965 10:52:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:11.965 rmmod nvme_tcp 00:09:11.965 rmmod nvme_fabrics 00:09:11.965 rmmod nvme_keyring 00:09:11.965 10:52:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:12.223 10:52:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:12.223 10:52:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:12.223 10:52:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # 
'[' -n 64695 ']' 00:09:12.223 10:52:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # killprocess 64695 00:09:12.223 10:52:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@954 -- # '[' -z 64695 ']' 00:09:12.223 10:52:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@958 -- # kill -0 64695 00:09:12.223 10:52:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # uname 00:09:12.223 10:52:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:12.223 10:52:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64695 00:09:12.223 killing process with pid 64695 00:09:12.223 10:52:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:12.223 10:52:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:12.223 10:52:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64695' 00:09:12.223 10:52:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@973 -- # kill 64695 00:09:12.223 10:52:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@978 -- # wait 64695 00:09:12.481 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:12.481 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:12.481 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:12.481 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:12.481 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:09:12.481 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:12.481 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:09:12.481 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:12.481 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:12.482 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:12.482 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:12.482 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:12.482 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:12.482 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:12.482 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:12.482 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:12.482 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:12.482 
10:52:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:12.482 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:12.482 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:12.740 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:12.740 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:12.740 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:12.740 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:12.740 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:12.740 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:12.740 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 00:09:12.740 00:09:12.740 real 0m20.340s 00:09:12.740 user 1m16.484s 00:09:12.740 sys 0m8.689s 00:09:12.740 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:12.740 ************************************ 00:09:12.740 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:12.740 END TEST nvmf_target_multipath 00:09:12.740 ************************************ 00:09:12.740 10:52:59 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:12.740 10:52:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:12.740 10:52:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:12.740 10:52:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:12.740 ************************************ 00:09:12.740 START TEST nvmf_zcopy 00:09:12.740 ************************************ 00:09:12.740 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:12.740 * Looking for test storage... 
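The nvmftestfini sequence traced just above (unloading nvme-tcp/nvme-fabrics, killing the nvmf_tgt pid, restoring iptables, tearing down the bridge and veth links, removing the target namespace) amounts to roughly the following cleanup script. This is a sketch reconstructed from the logged commands, not the literal nvmftestfini/nvmf_veth_fini implementation; the final netns deletion is assumed to be what the xtrace-silenced remove_spdk_ns helper does.

    #!/usr/bin/env bash
    # Sketch: tear down the veth/bridge test topology seen in the trace above.
    # Drop only the SPDK-tagged iptables rules, keep everything else intact.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    # Detach the bridge ports and bring them down, as in common.sh@233-240.
    for port in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$port" nomaster || true
        ip link set "$port" down || true
    done
    # Remove the bridge, the initiator-side veth ends, and the namespaced target ends.
    ip link delete nvmf_br type bridge || true
    ip link delete nvmf_init_if || true
    ip link delete nvmf_init_if2 || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 || true
    # Assumed equivalent of remove_spdk_ns, whose body is hidden by xtrace_disable.
    ip netns del nvmf_tgt_ns_spdk || true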
00:09:12.740 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:12.740 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:12.740 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:09:12.740 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:13.000 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:13.000 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:13.000 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:13.000 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:13.000 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:09:13.000 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:09:13.000 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:09:13.000 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:09:13.000 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:09:13.000 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:09:13.000 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:09:13.000 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:13.000 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:09:13.000 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:09:13.000 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:13.000 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:13.000 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:09:13.000 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:09:13.000 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:13.000 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:09:13.000 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:09:13.000 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:09:13.000 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:09:13.000 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:13.000 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:09:13.000 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:09:13.000 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:13.000 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:13.000 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:09:13.000 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:13.000 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:13.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.000 --rc genhtml_branch_coverage=1 00:09:13.000 --rc genhtml_function_coverage=1 00:09:13.000 --rc genhtml_legend=1 00:09:13.000 --rc geninfo_all_blocks=1 00:09:13.000 --rc geninfo_unexecuted_blocks=1 00:09:13.000 00:09:13.000 ' 00:09:13.000 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:13.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.000 --rc genhtml_branch_coverage=1 00:09:13.000 --rc genhtml_function_coverage=1 00:09:13.000 --rc genhtml_legend=1 00:09:13.000 --rc geninfo_all_blocks=1 00:09:13.000 --rc geninfo_unexecuted_blocks=1 00:09:13.000 00:09:13.000 ' 00:09:13.000 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:13.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.000 --rc genhtml_branch_coverage=1 00:09:13.000 --rc genhtml_function_coverage=1 00:09:13.000 --rc genhtml_legend=1 00:09:13.000 --rc geninfo_all_blocks=1 00:09:13.000 --rc geninfo_unexecuted_blocks=1 00:09:13.000 00:09:13.000 ' 00:09:13.000 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:13.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.000 --rc genhtml_branch_coverage=1 00:09:13.000 --rc genhtml_function_coverage=1 00:09:13.000 --rc genhtml_legend=1 00:09:13.000 --rc geninfo_all_blocks=1 00:09:13.000 --rc geninfo_unexecuted_blocks=1 00:09:13.000 00:09:13.000 ' 00:09:13.000 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:13.000 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:13.000 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
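The scripts/common.sh trace above is a dotted-version comparison; here it evaluates `lt 1.15 2` to decide whether the installed lcov is older than 2 and which coverage flags to export. A minimal sketch of that logic follows: the function names match the trace, but the body is a simplified reconstruction, not the actual scripts/common.sh code.

    #!/usr/bin/env bash
    # Compare two dotted versions field by field, as in the lt/cmp_versions trace above.
    lt() { cmp_versions "$1" "<" "$2"; }
    cmp_versions() {
        local IFS='.-:' op=$2
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            # Missing fields count as 0; the first unequal field decides the result.
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == ">" ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == "<" ]]; return; }
        done
        [[ $op == "==" || $op == "<=" || $op == ">=" ]]
    }
    lt 1.15 2 && echo "lcov older than 2: use the legacy branch/function coverage flags"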
00:09:13.000 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:13.000 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:13.000 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:13.000 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:13.000 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:13.000 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:13.000 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:13.000 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:13.000 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:13.000 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:09:13.000 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:09:13.000 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:13.000 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:13.000 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:13.000 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:13.000 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:13.000 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:09:13.000 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:13.000 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:13.000 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:13.001 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.001 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.001 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.001 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:13.001 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.001 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:09:13.001 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:13.001 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:13.001 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:13.001 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:13.001 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:13.001 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:13.001 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:13.001 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:13.001 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:13.001 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:13.001 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:13.001 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:13.001 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
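The "[: : integer expression expected" message above is a benign artifact of testing an unset variable with a numeric operator: the trace shows '[' '' -eq 1 ']' at nvmf/common.sh line 33. The usual defensive pattern is to default the expansion before the comparison; a sketch, where the variable name is purely illustrative and not the one common.sh actually uses:

    #!/usr/bin/env bash
    # Fails with "[: : integer expression expected" when the variable is empty:
    #   [ "$SPDK_TEST_SOMETHING" -eq 1 ] && echo "feature enabled"
    # Defaulting the expansion keeps the numeric test well-formed:
    if [ "${SPDK_TEST_SOMETHING:-0}" -eq 1 ]; then
        echo "feature enabled"
    else
        echo "feature disabled"
    fi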
00:09:13.001 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:13.001 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:13.001 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:13.001 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:13.001 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:13.001 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:13.001 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:13.001 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:13.001 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:13.001 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:13.001 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:13.001 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:13.001 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:13.001 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:13.001 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:13.001 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:13.001 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:13.001 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:13.001 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:13.001 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:13.001 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:13.001 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:13.001 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:13.001 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:13.001 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:13.001 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:13.001 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:13.001 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:13.001 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:13.001 Cannot find device "nvmf_init_br" 00:09:13.001 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:09:13.001 10:52:59 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:13.001 Cannot find device "nvmf_init_br2" 00:09:13.001 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:09:13.001 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:13.001 Cannot find device "nvmf_tgt_br" 00:09:13.001 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # true 00:09:13.001 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:13.001 Cannot find device "nvmf_tgt_br2" 00:09:13.001 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # true 00:09:13.001 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:13.001 Cannot find device "nvmf_init_br" 00:09:13.001 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # true 00:09:13.001 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:13.001 Cannot find device "nvmf_init_br2" 00:09:13.001 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # true 00:09:13.001 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:13.001 Cannot find device "nvmf_tgt_br" 00:09:13.001 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # true 00:09:13.001 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:13.001 Cannot find device "nvmf_tgt_br2" 00:09:13.001 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # true 00:09:13.001 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:13.001 Cannot find device "nvmf_br" 00:09:13.001 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # true 00:09:13.001 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:13.001 Cannot find device "nvmf_init_if" 00:09:13.001 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # true 00:09:13.001 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:13.001 Cannot find device "nvmf_init_if2" 00:09:13.001 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # true 00:09:13.001 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:13.001 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:13.001 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # true 00:09:13.001 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:13.001 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:13.001 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # true 00:09:13.001 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:13.001 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:13.001 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:09:13.001 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:13.260 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:13.260 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:13.260 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:13.260 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:13.260 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:13.260 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:13.260 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:13.261 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:13.261 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:13.261 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:13.261 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:13.261 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:13.261 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:13.261 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:13.261 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:13.261 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:13.261 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:13.261 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:13.261 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:13.261 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:13.261 10:52:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:13.261 10:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:13.261 10:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:13.261 10:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:13.261 10:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:13.261 10:53:00 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:13.261 10:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:13.261 10:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:13.261 10:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:13.261 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:13.261 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:09:13.261 00:09:13.261 --- 10.0.0.3 ping statistics --- 00:09:13.261 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:13.261 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:09:13.261 10:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:13.261 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:13.261 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.061 ms 00:09:13.261 00:09:13.261 --- 10.0.0.4 ping statistics --- 00:09:13.261 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:13.261 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:09:13.261 10:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:13.261 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:13.261 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.051 ms 00:09:13.261 00:09:13.261 --- 10.0.0.1 ping statistics --- 00:09:13.261 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:13.261 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:09:13.261 10:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:13.261 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:13.261 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.035 ms 00:09:13.261 00:09:13.261 --- 10.0.0.2 ping statistics --- 00:09:13.261 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:13.261 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:09:13.261 10:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:13.261 10:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@461 -- # return 0 00:09:13.261 10:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:13.261 10:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:13.261 10:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:13.261 10:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:13.261 10:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:13.261 10:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:13.261 10:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:13.261 10:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:13.261 10:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:13.261 10:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:13.261 10:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:13.261 10:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=65222 00:09:13.261 10:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 65222 00:09:13.261 10:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 65222 ']' 00:09:13.261 10:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:13.261 10:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:13.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:13.261 10:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:13.261 10:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:13.261 10:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:13.261 10:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:13.519 [2024-11-15 10:53:00.155686] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
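The nvmf_veth_init trace above builds the 10.0.0.0/24 topology that the rest of the zcopy run relies on: the target lives in the nvmf_tgt_ns_spdk namespace, veth pairs cross the namespace boundary, and a bridge plus two iptables ACCEPT rules stitch everything together. A condensed sketch of that setup, using one initiator/target pair instead of the two pairs the trace creates; names and addresses are copied from the log.

    #!/usr/bin/env bash
    set -e
    # Target side of the fabric lives in its own network namespace.
    ip netns add nvmf_tgt_ns_spdk
    # One initiator-side and one target-side veth pair (the trace creates two of each).
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    # Addresses as logged: initiator 10.0.0.1/24, target 10.0.0.3/24.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # A bridge ties the host-side peer ends together.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_tgt_br up
    # Let NVMe/TCP traffic reach port 4420 and cross the bridge.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    # Same smoke test as the trace: the initiator can reach the target address.
    ping -c 1 10.0.0.3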
00:09:13.519 [2024-11-15 10:53:00.155780] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:13.519 [2024-11-15 10:53:00.310657] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:13.519 [2024-11-15 10:53:00.372893] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:13.519 [2024-11-15 10:53:00.372957] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:13.519 [2024-11-15 10:53:00.372972] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:13.519 [2024-11-15 10:53:00.372983] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:13.519 [2024-11-15 10:53:00.372993] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:13.519 [2024-11-15 10:53:00.373501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:13.777 [2024-11-15 10:53:00.449456] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:13.777 10:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:13.777 10:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:09:13.777 10:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:13.777 10:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:13.777 10:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:13.777 10:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:13.777 10:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:13.777 10:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:13.777 10:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.777 10:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:13.777 [2024-11-15 10:53:00.576354] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:13.777 10:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.777 10:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:13.777 10:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.777 10:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:13.777 10:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.777 10:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:13.777 10:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.777 10:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:09:13.777 [2024-11-15 10:53:00.592453] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:13.777 10:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.777 10:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:13.777 10:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.777 10:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:13.777 10:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.777 10:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:13.777 10:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.777 10:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:13.777 malloc0 00:09:13.777 10:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.777 10:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:13.777 10:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.777 10:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:13.777 10:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.777 10:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:13.778 10:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:13.778 10:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:13.778 10:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:13.778 10:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:13.778 10:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:13.778 { 00:09:13.778 "params": { 00:09:13.778 "name": "Nvme$subsystem", 00:09:13.778 "trtype": "$TEST_TRANSPORT", 00:09:13.778 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:13.778 "adrfam": "ipv4", 00:09:13.778 "trsvcid": "$NVMF_PORT", 00:09:13.778 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:13.778 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:13.778 "hdgst": ${hdgst:-false}, 00:09:13.778 "ddgst": ${ddgst:-false} 00:09:13.778 }, 00:09:13.778 "method": "bdev_nvme_attach_controller" 00:09:13.778 } 00:09:13.778 EOF 00:09:13.778 )") 00:09:14.036 10:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:14.036 10:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
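The RPC sequence traced above is what actually builds the zero-copy target: a TCP transport created with --zcopy and in-capsule data size 0, one subsystem with a 32 MiB malloc namespace, and data plus discovery listeners on 10.0.0.3:4420. Collected into one place below, with the arguments copied from the trace; the test drives these through rpc_cmd, but rpc.py with the same arguments against the target's default RPC socket is equivalent.

    #!/usr/bin/env bash
    set -e
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # TCP transport with zero-copy enabled and in-capsule data size 0 (-c 0).
    "$rpc" nvmf_create_transport -t tcp -o -c 0 --zcopy
    # Subsystem allowing any host (-a), fixed serial, up to 10 namespaces (-m 10).
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    "$rpc" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
    # 32 MiB malloc bdev with 4096-byte blocks, exposed as namespace 1.
    "$rpc" bdev_malloc_create 32 4096 -b malloc0
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1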
00:09:14.036 10:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:14.036 10:53:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:14.036 "params": { 00:09:14.036 "name": "Nvme1", 00:09:14.036 "trtype": "tcp", 00:09:14.036 "traddr": "10.0.0.3", 00:09:14.036 "adrfam": "ipv4", 00:09:14.036 "trsvcid": "4420", 00:09:14.036 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:14.036 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:14.036 "hdgst": false, 00:09:14.036 "ddgst": false 00:09:14.036 }, 00:09:14.036 "method": "bdev_nvme_attach_controller" 00:09:14.036 }' 00:09:14.036 [2024-11-15 10:53:00.692999] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:09:14.036 [2024-11-15 10:53:00.693105] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65247 ] 00:09:14.036 [2024-11-15 10:53:00.845459] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:14.295 [2024-11-15 10:53:00.907317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:14.296 [2024-11-15 10:53:00.990067] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:14.296 Running I/O for 10 seconds... 00:09:16.606 7029.00 IOPS, 54.91 MiB/s [2024-11-15T10:53:04.403Z] 6821.50 IOPS, 53.29 MiB/s [2024-11-15T10:53:05.350Z] 6554.33 IOPS, 51.21 MiB/s [2024-11-15T10:53:06.287Z] 6204.00 IOPS, 48.47 MiB/s [2024-11-15T10:53:07.222Z] 5993.20 IOPS, 46.82 MiB/s [2024-11-15T10:53:08.157Z] 5853.50 IOPS, 45.73 MiB/s [2024-11-15T10:53:09.533Z] 5762.43 IOPS, 45.02 MiB/s [2024-11-15T10:53:10.470Z] 5693.38 IOPS, 44.48 MiB/s [2024-11-15T10:53:11.427Z] 5633.44 IOPS, 44.01 MiB/s [2024-11-15T10:53:11.427Z] 5598.30 IOPS, 43.74 MiB/s 00:09:24.566 Latency(us) 00:09:24.566 [2024-11-15T10:53:11.427Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:24.566 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:09:24.566 Verification LBA range: start 0x0 length 0x1000 00:09:24.566 Nvme1n1 : 10.02 5601.00 43.76 0.00 0.00 22792.88 3187.43 26929.34 00:09:24.566 [2024-11-15T10:53:11.427Z] =================================================================================================================== 00:09:24.566 [2024-11-15T10:53:11.427Z] Total : 5601.00 43.76 0.00 0.00 22792.88 3187.43 26929.34 00:09:24.566 10:53:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=65370 00:09:24.566 10:53:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:09:24.566 10:53:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:24.566 10:53:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:09:24.566 10:53:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:09:24.566 10:53:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:24.566 10:53:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:24.566 10:53:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:24.566 10:53:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy 
-- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:24.566 { 00:09:24.566 "params": { 00:09:24.566 "name": "Nvme$subsystem", 00:09:24.566 "trtype": "$TEST_TRANSPORT", 00:09:24.566 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:24.566 "adrfam": "ipv4", 00:09:24.566 "trsvcid": "$NVMF_PORT", 00:09:24.566 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:24.566 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:24.566 "hdgst": ${hdgst:-false}, 00:09:24.566 "ddgst": ${ddgst:-false} 00:09:24.566 }, 00:09:24.566 "method": "bdev_nvme_attach_controller" 00:09:24.566 } 00:09:24.566 EOF 00:09:24.566 )") 00:09:24.566 10:53:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:24.566 [2024-11-15 10:53:11.406481] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.566 [2024-11-15 10:53:11.406542] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.566 10:53:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:09:24.566 10:53:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:24.566 10:53:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:24.566 "params": { 00:09:24.566 "name": "Nvme1", 00:09:24.566 "trtype": "tcp", 00:09:24.566 "traddr": "10.0.0.3", 00:09:24.566 "adrfam": "ipv4", 00:09:24.566 "trsvcid": "4420", 00:09:24.566 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:24.566 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:24.566 "hdgst": false, 00:09:24.566 "ddgst": false 00:09:24.566 }, 00:09:24.566 "method": "bdev_nvme_attach_controller" 00:09:24.566 }' 00:09:24.566 [2024-11-15 10:53:11.414444] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.566 [2024-11-15 10:53:11.414478] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.566 [2024-11-15 10:53:11.422440] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.567 [2024-11-15 10:53:11.422471] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.825 [2024-11-15 10:53:11.434450] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.825 [2024-11-15 10:53:11.434483] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.825 [2024-11-15 10:53:11.446448] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.825 [2024-11-15 10:53:11.446478] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.825 [2024-11-15 10:53:11.458449] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.825 [2024-11-15 10:53:11.458480] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.825 [2024-11-15 10:53:11.460539] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
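For reference, the trace above only shows the per-controller entry that gen_nvmf_target_json builds and pipes to bdevperf over /dev/fd/63. The sketch below is a minimal approximation of the same setup: the outer "subsystems"/"bdev" wrapper is assumed (it follows the generic SPDK JSON-config layout and is not printed in the log), the params block and the bdevperf flags are copied verbatim from the trace, and the file name /tmp/bdevperf_nvme.json is hypothetical.

# Sketch only: rebuild the attach-controller config printed above and run the
# same bdevperf invocation against it. The "subsystems"/"bdev" wrapper is an
# assumption; the params and flags come straight from the traced run.
cat > /tmp/bdevperf_nvme.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.3",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# 5 s run, queue depth 128, 50/50 random read/write, 8 KiB I/O size.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /tmp/bdevperf_nvme.json -t 5 -q 128 -w randrw -M 50 -o 8192

With -o 8192 the MiB/s figures are simply IOPS * 8 KiB (5601.00 IOPS * 8192 B / 2^20 = 43.76 MiB/s in the 10-second summary above), and the average latency column is consistent with the queue depth: 128 / 22792.88 us gives roughly 5616 IOPS, close to the 5601.00 reported.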
00:09:24.825 [2024-11-15 10:53:11.460656] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65370 ] 00:09:24.825 [2024-11-15 10:53:11.470453] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.825 [2024-11-15 10:53:11.470483] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.825 [2024-11-15 10:53:11.482456] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.825 [2024-11-15 10:53:11.482486] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.825 [2024-11-15 10:53:11.494461] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.825 [2024-11-15 10:53:11.494490] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.825 [2024-11-15 10:53:11.506461] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.825 [2024-11-15 10:53:11.506491] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.825 [2024-11-15 10:53:11.518465] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.825 [2024-11-15 10:53:11.518497] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.825 [2024-11-15 10:53:11.530466] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.825 [2024-11-15 10:53:11.530497] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.825 [2024-11-15 10:53:11.542475] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.825 [2024-11-15 10:53:11.542507] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.825 [2024-11-15 10:53:11.554471] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.825 [2024-11-15 10:53:11.554502] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.825 [2024-11-15 10:53:11.566472] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.825 [2024-11-15 10:53:11.566502] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.825 [2024-11-15 10:53:11.578477] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.825 [2024-11-15 10:53:11.578507] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.825 [2024-11-15 10:53:11.590476] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.825 [2024-11-15 10:53:11.590506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.825 [2024-11-15 10:53:11.598480] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.825 [2024-11-15 10:53:11.598509] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.825 [2024-11-15 10:53:11.606483] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.825 [2024-11-15 10:53:11.606513] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.825 [2024-11-15 10:53:11.608665] app.c: 919:spdk_app_start: *NOTICE*: 
Total cores available: 1 00:09:24.825 [2024-11-15 10:53:11.614491] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.825 [2024-11-15 10:53:11.614522] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.825 [2024-11-15 10:53:11.626489] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.825 [2024-11-15 10:53:11.626518] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.825 [2024-11-15 10:53:11.634490] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.825 [2024-11-15 10:53:11.634520] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.825 [2024-11-15 10:53:11.642488] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.825 [2024-11-15 10:53:11.642516] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.825 [2024-11-15 10:53:11.650493] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.825 [2024-11-15 10:53:11.650522] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.825 [2024-11-15 10:53:11.658512] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.825 [2024-11-15 10:53:11.658551] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.825 [2024-11-15 10:53:11.660156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:24.825 [2024-11-15 10:53:11.666499] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.825 [2024-11-15 10:53:11.666543] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.825 [2024-11-15 10:53:11.674511] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.825 [2024-11-15 10:53:11.674555] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.825 [2024-11-15 10:53:11.682520] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.825 [2024-11-15 10:53:11.682558] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.084 [2024-11-15 10:53:11.690513] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.084 [2024-11-15 10:53:11.690552] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.084 [2024-11-15 10:53:11.698526] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.084 [2024-11-15 10:53:11.698573] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.084 [2024-11-15 10:53:11.706515] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.084 [2024-11-15 10:53:11.706553] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.084 [2024-11-15 10:53:11.714518] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.084 [2024-11-15 10:53:11.714557] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.084 [2024-11-15 10:53:11.722521] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.084 [2024-11-15 10:53:11.722557] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.084 
[2024-11-15 10:53:11.730523] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.084 [2024-11-15 10:53:11.730567] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.084 [2024-11-15 10:53:11.738522] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.084 [2024-11-15 10:53:11.738587] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.084 [2024-11-15 10:53:11.743348] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:25.084 [2024-11-15 10:53:11.746528] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.084 [2024-11-15 10:53:11.746565] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.084 [2024-11-15 10:53:11.754533] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.084 [2024-11-15 10:53:11.754572] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.084 [2024-11-15 10:53:11.762531] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.084 [2024-11-15 10:53:11.762603] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.084 [2024-11-15 10:53:11.770537] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.084 [2024-11-15 10:53:11.770598] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.084 [2024-11-15 10:53:11.778555] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.084 [2024-11-15 10:53:11.778597] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.084 [2024-11-15 10:53:11.786558] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.084 [2024-11-15 10:53:11.786609] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.084 [2024-11-15 10:53:11.794558] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.084 [2024-11-15 10:53:11.794602] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.084 [2024-11-15 10:53:11.802590] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.084 [2024-11-15 10:53:11.802624] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.084 [2024-11-15 10:53:11.810608] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.084 [2024-11-15 10:53:11.810654] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.084 [2024-11-15 10:53:11.818626] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.084 [2024-11-15 10:53:11.818672] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.084 [2024-11-15 10:53:11.826653] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.084 [2024-11-15 10:53:11.826699] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.084 [2024-11-15 10:53:11.834685] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.084 [2024-11-15 10:53:11.834748] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.084 [2024-11-15 
10:53:11.842638] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.084 [2024-11-15 10:53:11.842688] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.084 [2024-11-15 10:53:11.850666] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.084 [2024-11-15 10:53:11.850710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.084 [2024-11-15 10:53:11.858687] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.084 [2024-11-15 10:53:11.858738] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.084 [2024-11-15 10:53:11.866660] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.085 [2024-11-15 10:53:11.866690] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.085 Running I/O for 5 seconds... 00:09:25.085 [2024-11-15 10:53:11.874665] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.085 [2024-11-15 10:53:11.874709] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.085 [2024-11-15 10:53:11.888790] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.085 [2024-11-15 10:53:11.888842] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.085 [2024-11-15 10:53:11.902043] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.085 [2024-11-15 10:53:11.902095] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.085 [2024-11-15 10:53:11.918225] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.085 [2024-11-15 10:53:11.918299] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.085 [2024-11-15 10:53:11.934720] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.085 [2024-11-15 10:53:11.934770] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.343 [2024-11-15 10:53:11.951020] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.343 [2024-11-15 10:53:11.951070] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.343 [2024-11-15 10:53:11.962706] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.343 [2024-11-15 10:53:11.962753] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.343 [2024-11-15 10:53:11.975877] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.343 [2024-11-15 10:53:11.975929] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.343 [2024-11-15 10:53:11.987856] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.343 [2024-11-15 10:53:11.987891] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.343 [2024-11-15 10:53:12.005460] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.343 [2024-11-15 10:53:12.005495] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.343 [2024-11-15 10:53:12.020792] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:09:25.343 [2024-11-15 10:53:12.020842] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.343 [2024-11-15 10:53:12.030530] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.343 [2024-11-15 10:53:12.030575] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.343 [2024-11-15 10:53:12.047215] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.343 [2024-11-15 10:53:12.047280] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.343 [2024-11-15 10:53:12.062076] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.343 [2024-11-15 10:53:12.062126] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.343 [2024-11-15 10:53:12.071422] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.343 [2024-11-15 10:53:12.071458] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.343 [2024-11-15 10:53:12.085354] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.343 [2024-11-15 10:53:12.085400] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.343 [2024-11-15 10:53:12.096704] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.343 [2024-11-15 10:53:12.096752] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.343 [2024-11-15 10:53:12.114000] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.343 [2024-11-15 10:53:12.114077] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.343 [2024-11-15 10:53:12.129981] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.343 [2024-11-15 10:53:12.130025] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.343 [2024-11-15 10:53:12.147090] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.343 [2024-11-15 10:53:12.147125] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.343 [2024-11-15 10:53:12.157221] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.343 [2024-11-15 10:53:12.157259] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.343 [2024-11-15 10:53:12.170357] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.343 [2024-11-15 10:53:12.170395] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.343 [2024-11-15 10:53:12.183044] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.343 [2024-11-15 10:53:12.183090] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.343 [2024-11-15 10:53:12.194748] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.343 [2024-11-15 10:53:12.194797] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.600 [2024-11-15 10:53:12.211325] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.600 [2024-11-15 10:53:12.211367] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.600 [2024-11-15 10:53:12.228059] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.600 [2024-11-15 10:53:12.228091] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.601 [2024-11-15 10:53:12.238101] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.601 [2024-11-15 10:53:12.238164] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.601 [2024-11-15 10:53:12.251913] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.601 [2024-11-15 10:53:12.251959] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.601 [2024-11-15 10:53:12.264862] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.601 [2024-11-15 10:53:12.264909] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.601 [2024-11-15 10:53:12.281601] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.601 [2024-11-15 10:53:12.281658] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.601 [2024-11-15 10:53:12.295919] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.601 [2024-11-15 10:53:12.295975] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.601 [2024-11-15 10:53:12.306916] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.601 [2024-11-15 10:53:12.306981] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.601 [2024-11-15 10:53:12.318902] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.601 [2024-11-15 10:53:12.318965] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.601 [2024-11-15 10:53:12.330970] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.601 [2024-11-15 10:53:12.331035] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.601 [2024-11-15 10:53:12.348372] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.601 [2024-11-15 10:53:12.348423] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.601 [2024-11-15 10:53:12.365247] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.601 [2024-11-15 10:53:12.365295] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.601 [2024-11-15 10:53:12.376199] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.601 [2024-11-15 10:53:12.376233] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.601 [2024-11-15 10:53:12.390941] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.601 [2024-11-15 10:53:12.391003] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.601 [2024-11-15 10:53:12.407504] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.601 [2024-11-15 10:53:12.407566] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.601 [2024-11-15 10:53:12.417985] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.601 [2024-11-15 10:53:12.418021] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.601 [2024-11-15 10:53:12.431445] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.601 [2024-11-15 10:53:12.431496] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.601 [2024-11-15 10:53:12.443329] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.601 [2024-11-15 10:53:12.443362] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.858 [2024-11-15 10:53:12.460309] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.858 [2024-11-15 10:53:12.460345] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.858 [2024-11-15 10:53:12.476400] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.858 [2024-11-15 10:53:12.476434] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.858 [2024-11-15 10:53:12.486549] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.858 [2024-11-15 10:53:12.486585] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.858 [2024-11-15 10:53:12.499641] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.858 [2024-11-15 10:53:12.499673] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.858 [2024-11-15 10:53:12.511554] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.858 [2024-11-15 10:53:12.511619] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.858 [2024-11-15 10:53:12.528092] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.858 [2024-11-15 10:53:12.528134] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.858 [2024-11-15 10:53:12.543582] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.858 [2024-11-15 10:53:12.543661] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.858 [2024-11-15 10:53:12.553115] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.858 [2024-11-15 10:53:12.553151] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.858 [2024-11-15 10:53:12.566385] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.858 [2024-11-15 10:53:12.566423] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.858 [2024-11-15 10:53:12.578980] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.858 [2024-11-15 10:53:12.579030] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.858 [2024-11-15 10:53:12.590964] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.858 [2024-11-15 10:53:12.591014] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.858 [2024-11-15 10:53:12.607293] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.858 [2024-11-15 10:53:12.607327] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.858 [2024-11-15 10:53:12.622934] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.858 [2024-11-15 10:53:12.622981] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.858 [2024-11-15 10:53:12.633618] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.858 [2024-11-15 10:53:12.633698] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.858 [2024-11-15 10:53:12.645330] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.858 [2024-11-15 10:53:12.645373] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.858 [2024-11-15 10:53:12.658683] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.858 [2024-11-15 10:53:12.658732] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.858 [2024-11-15 10:53:12.671021] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.858 [2024-11-15 10:53:12.671072] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.858 [2024-11-15 10:53:12.683383] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.858 [2024-11-15 10:53:12.683449] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.858 [2024-11-15 10:53:12.698409] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.858 [2024-11-15 10:53:12.698447] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.858 [2024-11-15 10:53:12.709267] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.858 [2024-11-15 10:53:12.709301] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.116 [2024-11-15 10:53:12.722525] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.116 [2024-11-15 10:53:12.722576] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.116 [2024-11-15 10:53:12.737029] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.116 [2024-11-15 10:53:12.737095] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.116 [2024-11-15 10:53:12.747038] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.116 [2024-11-15 10:53:12.747124] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.116 [2024-11-15 10:53:12.759573] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.116 [2024-11-15 10:53:12.759638] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.116 [2024-11-15 10:53:12.771717] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.116 [2024-11-15 10:53:12.771762] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.116 [2024-11-15 10:53:12.787918] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.116 [2024-11-15 10:53:12.787984] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.116 [2024-11-15 10:53:12.804111] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.116 [2024-11-15 10:53:12.804147] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.116 [2024-11-15 10:53:12.815489] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.116 [2024-11-15 10:53:12.815565] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.116 [2024-11-15 10:53:12.827792] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.116 [2024-11-15 10:53:12.827862] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.116 [2024-11-15 10:53:12.840811] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.116 [2024-11-15 10:53:12.840845] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.116 [2024-11-15 10:53:12.856926] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.116 [2024-11-15 10:53:12.856994] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.116 [2024-11-15 10:53:12.873583] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.116 10248.00 IOPS, 80.06 MiB/s [2024-11-15T10:53:12.977Z] [2024-11-15 10:53:12.873628] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.116 [2024-11-15 10:53:12.890883] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.117 [2024-11-15 10:53:12.890928] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.117 [2024-11-15 10:53:12.905016] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.117 [2024-11-15 10:53:12.905050] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.117 [2024-11-15 10:53:12.915751] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.117 [2024-11-15 10:53:12.915817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.117 [2024-11-15 10:53:12.927430] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.117 [2024-11-15 10:53:12.927464] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.117 [2024-11-15 10:53:12.940279] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.117 [2024-11-15 10:53:12.940344] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.117 [2024-11-15 10:53:12.956897] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.117 [2024-11-15 10:53:12.956930] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.117 [2024-11-15 10:53:12.973633] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.117 [2024-11-15 10:53:12.973679] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.375 [2024-11-15 10:53:12.984701] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.375 [2024-11-15 10:53:12.984734] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.375 [2024-11-15 10:53:12.999661] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.375 [2024-11-15 10:53:12.999722] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.375 [2024-11-15 
10:53:13.016247] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.375 [2024-11-15 10:53:13.016283] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.375 [2024-11-15 10:53:13.026211] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.375 [2024-11-15 10:53:13.026279] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.375 [2024-11-15 10:53:13.039791] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.375 [2024-11-15 10:53:13.039824] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.375 [2024-11-15 10:53:13.051983] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.375 [2024-11-15 10:53:13.052018] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.375 [2024-11-15 10:53:13.065049] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.375 [2024-11-15 10:53:13.065083] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.375 [2024-11-15 10:53:13.082618] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.375 [2024-11-15 10:53:13.082656] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.375 [2024-11-15 10:53:13.098386] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.375 [2024-11-15 10:53:13.098424] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.375 [2024-11-15 10:53:13.118098] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.375 [2024-11-15 10:53:13.118147] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.375 [2024-11-15 10:53:13.129134] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.375 [2024-11-15 10:53:13.129167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.375 [2024-11-15 10:53:13.143096] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.375 [2024-11-15 10:53:13.143161] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.375 [2024-11-15 10:53:13.159670] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.375 [2024-11-15 10:53:13.159744] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.375 [2024-11-15 10:53:13.171870] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.375 [2024-11-15 10:53:13.171918] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.375 [2024-11-15 10:53:13.186282] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.375 [2024-11-15 10:53:13.186320] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.375 [2024-11-15 10:53:13.201462] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.375 [2024-11-15 10:53:13.201500] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.375 [2024-11-15 10:53:13.218359] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.375 [2024-11-15 10:53:13.218395] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.634 [2024-11-15 10:53:13.234866] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.634 [2024-11-15 10:53:13.234930] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.634 [2024-11-15 10:53:13.246087] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.634 [2024-11-15 10:53:13.246135] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.634 [2024-11-15 10:53:13.259043] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.634 [2024-11-15 10:53:13.259093] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.634 [2024-11-15 10:53:13.272264] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.634 [2024-11-15 10:53:13.272298] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.634 [2024-11-15 10:53:13.285160] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.634 [2024-11-15 10:53:13.285226] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.634 [2024-11-15 10:53:13.302531] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.634 [2024-11-15 10:53:13.302636] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.634 [2024-11-15 10:53:13.317802] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.634 [2024-11-15 10:53:13.317835] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.634 [2024-11-15 10:53:13.329282] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.634 [2024-11-15 10:53:13.329335] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.634 [2024-11-15 10:53:13.347343] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.634 [2024-11-15 10:53:13.347394] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.634 [2024-11-15 10:53:13.358085] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.634 [2024-11-15 10:53:13.358119] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.634 [2024-11-15 10:53:13.371544] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.634 [2024-11-15 10:53:13.371607] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.634 [2024-11-15 10:53:13.386208] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.634 [2024-11-15 10:53:13.386284] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.634 [2024-11-15 10:53:13.402312] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.634 [2024-11-15 10:53:13.402350] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.634 [2024-11-15 10:53:13.413005] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.634 [2024-11-15 10:53:13.413037] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.634 [2024-11-15 10:53:13.427253] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.634 [2024-11-15 10:53:13.427289] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.634 [2024-11-15 10:53:13.439219] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.634 [2024-11-15 10:53:13.439255] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.634 [2024-11-15 10:53:13.454861] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.634 [2024-11-15 10:53:13.454906] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.634 [2024-11-15 10:53:13.471628] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.634 [2024-11-15 10:53:13.471692] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.634 [2024-11-15 10:53:13.482033] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.634 [2024-11-15 10:53:13.482084] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.893 [2024-11-15 10:53:13.495654] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.893 [2024-11-15 10:53:13.495704] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.893 [2024-11-15 10:53:13.508238] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.893 [2024-11-15 10:53:13.508286] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.893 [2024-11-15 10:53:13.524507] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.893 [2024-11-15 10:53:13.524552] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.893 [2024-11-15 10:53:13.541557] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.893 [2024-11-15 10:53:13.541618] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.893 [2024-11-15 10:53:13.553120] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.893 [2024-11-15 10:53:13.553170] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.893 [2024-11-15 10:53:13.567692] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.893 [2024-11-15 10:53:13.567749] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.893 [2024-11-15 10:53:13.584034] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.893 [2024-11-15 10:53:13.584077] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.893 [2024-11-15 10:53:13.595152] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.893 [2024-11-15 10:53:13.595202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.893 [2024-11-15 10:53:13.609003] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.893 [2024-11-15 10:53:13.609063] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.893 [2024-11-15 10:53:13.625140] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.893 [2024-11-15 10:53:13.625192] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.893 [2024-11-15 10:53:13.636441] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.893 [2024-11-15 10:53:13.636475] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.893 [2024-11-15 10:53:13.649829] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.893 [2024-11-15 10:53:13.649863] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.893 [2024-11-15 10:53:13.661786] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.893 [2024-11-15 10:53:13.661819] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.893 [2024-11-15 10:53:13.674912] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.893 [2024-11-15 10:53:13.674960] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.893 [2024-11-15 10:53:13.691449] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.893 [2024-11-15 10:53:13.691553] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.893 [2024-11-15 10:53:13.708330] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.893 [2024-11-15 10:53:13.708392] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.893 [2024-11-15 10:53:13.719505] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.893 [2024-11-15 10:53:13.719553] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.893 [2024-11-15 10:53:13.735909] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.893 [2024-11-15 10:53:13.735958] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:26.893 [2024-11-15 10:53:13.750584] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:26.893 [2024-11-15 10:53:13.750651] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.152 [2024-11-15 10:53:13.761271] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.152 [2024-11-15 10:53:13.761334] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.152 [2024-11-15 10:53:13.775253] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.152 [2024-11-15 10:53:13.775297] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.152 [2024-11-15 10:53:13.786905] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.152 [2024-11-15 10:53:13.786965] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.152 [2024-11-15 10:53:13.803415] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.152 [2024-11-15 10:53:13.803465] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.152 [2024-11-15 10:53:13.818484] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.152 [2024-11-15 10:53:13.818555] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.152 [2024-11-15 10:53:13.829710] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.152 [2024-11-15 10:53:13.829792] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.152 [2024-11-15 10:53:13.840718] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.152 [2024-11-15 10:53:13.840767] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.152 [2024-11-15 10:53:13.856209] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.152 [2024-11-15 10:53:13.856247] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.152 [2024-11-15 10:53:13.865621] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.152 [2024-11-15 10:53:13.865655] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.152 10220.00 IOPS, 79.84 MiB/s [2024-11-15T10:53:14.013Z] [2024-11-15 10:53:13.882508] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.152 [2024-11-15 10:53:13.882591] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.152 [2024-11-15 10:53:13.892647] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.152 [2024-11-15 10:53:13.892682] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.152 [2024-11-15 10:53:13.903167] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.152 [2024-11-15 10:53:13.903350] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.152 [2024-11-15 10:53:13.915447] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.152 [2024-11-15 10:53:13.915482] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.152 [2024-11-15 10:53:13.931286] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.152 [2024-11-15 10:53:13.931322] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.152 [2024-11-15 10:53:13.948445] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.152 [2024-11-15 10:53:13.948482] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.152 [2024-11-15 10:53:13.962682] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.152 [2024-11-15 10:53:13.962717] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.152 [2024-11-15 10:53:13.977611] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.152 [2024-11-15 10:53:13.977643] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.152 [2024-11-15 10:53:13.988356] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.152 [2024-11-15 10:53:13.988389] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.152 [2024-11-15 10:53:14.004142] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.152 [2024-11-15 10:53:14.004295] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.411 [2024-11-15 10:53:14.019217] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:09:27.411 [2024-11-15 10:53:14.019375] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.411 [2024-11-15 10:53:14.027355] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.411 [2024-11-15 10:53:14.027388] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.411 [2024-11-15 10:53:14.038212] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.411 [2024-11-15 10:53:14.038410] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.411 [2024-11-15 10:53:14.055904] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.411 [2024-11-15 10:53:14.055937] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.411 [2024-11-15 10:53:14.072556] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.411 [2024-11-15 10:53:14.072588] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.411 [2024-11-15 10:53:14.083842] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.411 [2024-11-15 10:53:14.083885] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.411 [2024-11-15 10:53:14.100108] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.411 [2024-11-15 10:53:14.100140] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.411 [2024-11-15 10:53:14.117049] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.411 [2024-11-15 10:53:14.117212] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.411 [2024-11-15 10:53:14.126280] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.411 [2024-11-15 10:53:14.126316] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.411 [2024-11-15 10:53:14.139270] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.411 [2024-11-15 10:53:14.139303] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.411 [2024-11-15 10:53:14.146868] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.411 [2024-11-15 10:53:14.146902] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.411 [2024-11-15 10:53:14.158529] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.411 [2024-11-15 10:53:14.158589] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.411 [2024-11-15 10:53:14.169889] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.411 [2024-11-15 10:53:14.169922] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.411 [2024-11-15 10:53:14.178896] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.411 [2024-11-15 10:53:14.178929] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.411 [2024-11-15 10:53:14.190088] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.411 [2024-11-15 10:53:14.190122] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.411 [2024-11-15 10:53:14.201859] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.411 [2024-11-15 10:53:14.201893] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.411 [2024-11-15 10:53:14.211976] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.411 [2024-11-15 10:53:14.212141] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.411 [2024-11-15 10:53:14.224432] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.411 [2024-11-15 10:53:14.224471] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.411 [2024-11-15 10:53:14.235072] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.411 [2024-11-15 10:53:14.235240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.411 [2024-11-15 10:53:14.246895] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.411 [2024-11-15 10:53:14.246930] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.412 [2024-11-15 10:53:14.255357] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.412 [2024-11-15 10:53:14.255391] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.412 [2024-11-15 10:53:14.267506] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.412 [2024-11-15 10:53:14.267569] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.670 [2024-11-15 10:53:14.277963] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.670 [2024-11-15 10:53:14.278119] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.670 [2024-11-15 10:53:14.292081] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.670 [2024-11-15 10:53:14.292117] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.670 [2024-11-15 10:53:14.307489] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.670 [2024-11-15 10:53:14.307553] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.670 [2024-11-15 10:53:14.318603] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.670 [2024-11-15 10:53:14.318637] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.670 [2024-11-15 10:53:14.327089] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.670 [2024-11-15 10:53:14.327123] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.670 [2024-11-15 10:53:14.339229] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.670 [2024-11-15 10:53:14.339265] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.670 [2024-11-15 10:53:14.349125] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.670 [2024-11-15 10:53:14.349159] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.670 [2024-11-15 10:53:14.363408] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.670 [2024-11-15 10:53:14.363443] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.670 [2024-11-15 10:53:14.372403] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.670 [2024-11-15 10:53:14.372598] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.670 [2024-11-15 10:53:14.386128] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.670 [2024-11-15 10:53:14.386162] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.670 [2024-11-15 10:53:14.395658] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.670 [2024-11-15 10:53:14.395700] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.670 [2024-11-15 10:53:14.410071] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.670 [2024-11-15 10:53:14.410104] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.670 [2024-11-15 10:53:14.419078] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.670 [2024-11-15 10:53:14.419252] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.670 [2024-11-15 10:53:14.431459] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.670 [2024-11-15 10:53:14.431511] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.670 [2024-11-15 10:53:14.441142] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.670 [2024-11-15 10:53:14.441176] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.670 [2024-11-15 10:53:14.452631] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.670 [2024-11-15 10:53:14.452664] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.670 [2024-11-15 10:53:14.462071] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.670 [2024-11-15 10:53:14.462323] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.670 [2024-11-15 10:53:14.477003] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.670 [2024-11-15 10:53:14.477169] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.670 [2024-11-15 10:53:14.486791] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.670 [2024-11-15 10:53:14.486825] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.670 [2024-11-15 10:53:14.501041] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.670 [2024-11-15 10:53:14.501074] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.670 [2024-11-15 10:53:14.519537] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.671 [2024-11-15 10:53:14.519621] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.930 [2024-11-15 10:53:14.530691] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.930 [2024-11-15 10:53:14.530740] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.930 [2024-11-15 10:53:14.542527] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.930 [2024-11-15 10:53:14.542593] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.930 [2024-11-15 10:53:14.558228] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.930 [2024-11-15 10:53:14.558304] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.930 [2024-11-15 10:53:14.575289] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.930 [2024-11-15 10:53:14.575463] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.930 [2024-11-15 10:53:14.584655] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.930 [2024-11-15 10:53:14.584689] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.930 [2024-11-15 10:53:14.595922] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.930 [2024-11-15 10:53:14.595956] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.930 [2024-11-15 10:53:14.607083] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.930 [2024-11-15 10:53:14.607116] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.930 [2024-11-15 10:53:14.615388] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.930 [2024-11-15 10:53:14.615422] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.930 [2024-11-15 10:53:14.627120] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.930 [2024-11-15 10:53:14.627153] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.930 [2024-11-15 10:53:14.638166] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.930 [2024-11-15 10:53:14.638356] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.930 [2024-11-15 10:53:14.647496] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.930 [2024-11-15 10:53:14.647556] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.930 [2024-11-15 10:53:14.661562] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.930 [2024-11-15 10:53:14.661606] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.930 [2024-11-15 10:53:14.671103] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.930 [2024-11-15 10:53:14.671137] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.930 [2024-11-15 10:53:14.686483] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.930 [2024-11-15 10:53:14.686519] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.930 [2024-11-15 10:53:14.703242] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.930 [2024-11-15 10:53:14.703278] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.930 [2024-11-15 10:53:14.712659] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.930 [2024-11-15 10:53:14.712691] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.930 [2024-11-15 10:53:14.727545] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.930 [2024-11-15 10:53:14.727624] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.930 [2024-11-15 10:53:14.736320] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.930 [2024-11-15 10:53:14.736488] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.930 [2024-11-15 10:53:14.748406] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.930 [2024-11-15 10:53:14.748603] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.930 [2024-11-15 10:53:14.757752] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.930 [2024-11-15 10:53:14.757787] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.930 [2024-11-15 10:53:14.767573] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.930 [2024-11-15 10:53:14.767633] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.930 [2024-11-15 10:53:14.781682] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.930 [2024-11-15 10:53:14.781714] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.189 [2024-11-15 10:53:14.791843] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.189 [2024-11-15 10:53:14.791875] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.189 [2024-11-15 10:53:14.801364] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.189 [2024-11-15 10:53:14.801397] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.189 [2024-11-15 10:53:14.815485] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.189 [2024-11-15 10:53:14.815719] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.190 [2024-11-15 10:53:14.825156] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.190 [2024-11-15 10:53:14.825206] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.190 [2024-11-15 10:53:14.839899] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.190 [2024-11-15 10:53:14.839933] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.190 [2024-11-15 10:53:14.855170] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.190 [2024-11-15 10:53:14.855333] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.190 [2024-11-15 10:53:14.865269] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.190 [2024-11-15 10:53:14.865304] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.190 [2024-11-15 10:53:14.873111] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.190 [2024-11-15 10:53:14.873144] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.190 11104.00 IOPS, 86.75 MiB/s [2024-11-15T10:53:15.051Z] [2024-11-15 
10:53:14.885094] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.190 [2024-11-15 10:53:14.885127] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.190 [2024-11-15 10:53:14.902275] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.190 [2024-11-15 10:53:14.902446] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.190 [2024-11-15 10:53:14.919510] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.190 [2024-11-15 10:53:14.919702] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.190 [2024-11-15 10:53:14.930088] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.190 [2024-11-15 10:53:14.930235] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.190 [2024-11-15 10:53:14.938626] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.190 [2024-11-15 10:53:14.938793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.190 [2024-11-15 10:53:14.949954] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.190 [2024-11-15 10:53:14.950101] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.190 [2024-11-15 10:53:14.967476] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.190 [2024-11-15 10:53:14.967512] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.190 [2024-11-15 10:53:14.976954] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.190 [2024-11-15 10:53:14.977120] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.190 [2024-11-15 10:53:14.991076] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.190 [2024-11-15 10:53:14.991112] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.190 [2024-11-15 10:53:14.999737] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.190 [2024-11-15 10:53:14.999770] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.190 [2024-11-15 10:53:15.014082] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.190 [2024-11-15 10:53:15.014116] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.190 [2024-11-15 10:53:15.022419] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.190 [2024-11-15 10:53:15.022454] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.190 [2024-11-15 10:53:15.036496] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.190 [2024-11-15 10:53:15.036557] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.449 [2024-11-15 10:53:15.052098] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.449 [2024-11-15 10:53:15.052131] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.449 [2024-11-15 10:53:15.070912] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.449 [2024-11-15 10:53:15.070947] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.449 [2024-11-15 10:53:15.080461] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.449 [2024-11-15 10:53:15.080494] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.449 [2024-11-15 10:53:15.093558] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.449 [2024-11-15 10:53:15.093590] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.449 [2024-11-15 10:53:15.108156] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.449 [2024-11-15 10:53:15.108204] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.449 [2024-11-15 10:53:15.116384] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.449 [2024-11-15 10:53:15.116417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.449 [2024-11-15 10:53:15.128040] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.449 [2024-11-15 10:53:15.128075] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.449 [2024-11-15 10:53:15.138932] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.449 [2024-11-15 10:53:15.138966] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.449 [2024-11-15 10:53:15.149909] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.449 [2024-11-15 10:53:15.149943] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.449 [2024-11-15 10:53:15.161968] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.449 [2024-11-15 10:53:15.162003] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.449 [2024-11-15 10:53:15.170770] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.449 [2024-11-15 10:53:15.170803] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.449 [2024-11-15 10:53:15.182089] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.449 [2024-11-15 10:53:15.182123] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.449 [2024-11-15 10:53:15.191650] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.449 [2024-11-15 10:53:15.191681] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.449 [2024-11-15 10:53:15.205366] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.449 [2024-11-15 10:53:15.205565] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.449 [2024-11-15 10:53:15.213871] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.449 [2024-11-15 10:53:15.213918] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.449 [2024-11-15 10:53:15.226671] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.449 [2024-11-15 10:53:15.226724] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.449 [2024-11-15 10:53:15.237610] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.449 [2024-11-15 10:53:15.237653] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.449 [2024-11-15 10:53:15.247749] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.449 [2024-11-15 10:53:15.247781] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.449 [2024-11-15 10:53:15.258030] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.449 [2024-11-15 10:53:15.258062] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.449 [2024-11-15 10:53:15.268291] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.449 [2024-11-15 10:53:15.268325] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.449 [2024-11-15 10:53:15.282191] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.449 [2024-11-15 10:53:15.282225] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.449 [2024-11-15 10:53:15.291030] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.449 [2024-11-15 10:53:15.291062] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.449 [2024-11-15 10:53:15.301584] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.449 [2024-11-15 10:53:15.301616] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.709 [2024-11-15 10:53:15.317927] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.709 [2024-11-15 10:53:15.317961] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.709 [2024-11-15 10:53:15.326351] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.709 [2024-11-15 10:53:15.326389] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.709 [2024-11-15 10:53:15.336176] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.709 [2024-11-15 10:53:15.336212] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.709 [2024-11-15 10:53:15.345386] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.709 [2024-11-15 10:53:15.345419] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.709 [2024-11-15 10:53:15.359241] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.709 [2024-11-15 10:53:15.359411] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.709 [2024-11-15 10:53:15.369161] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.709 [2024-11-15 10:53:15.369196] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.709 [2024-11-15 10:53:15.381246] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.709 [2024-11-15 10:53:15.381419] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.709 [2024-11-15 10:53:15.391886] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.709 [2024-11-15 10:53:15.392088] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.709 [2024-11-15 10:53:15.402151] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.709 [2024-11-15 10:53:15.402187] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.709 [2024-11-15 10:53:15.412470] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.709 [2024-11-15 10:53:15.412507] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.709 [2024-11-15 10:53:15.422144] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.709 [2024-11-15 10:53:15.422179] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.709 [2024-11-15 10:53:15.431904] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.709 [2024-11-15 10:53:15.431939] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.709 [2024-11-15 10:53:15.441722] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.709 [2024-11-15 10:53:15.441757] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.709 [2024-11-15 10:53:15.455246] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.709 [2024-11-15 10:53:15.455478] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.709 [2024-11-15 10:53:15.464355] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.709 [2024-11-15 10:53:15.464389] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.709 [2024-11-15 10:53:15.477994] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.709 [2024-11-15 10:53:15.478165] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.709 [2024-11-15 10:53:15.487089] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.709 [2024-11-15 10:53:15.487123] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.709 [2024-11-15 10:53:15.501178] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.709 [2024-11-15 10:53:15.501357] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.709 [2024-11-15 10:53:15.517004] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.709 [2024-11-15 10:53:15.517039] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.709 [2024-11-15 10:53:15.526739] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.709 [2024-11-15 10:53:15.526772] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.709 [2024-11-15 10:53:15.541377] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.709 [2024-11-15 10:53:15.541411] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.709 [2024-11-15 10:53:15.551100] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.710 [2024-11-15 10:53:15.551135] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.710 [2024-11-15 10:53:15.561761] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.710 [2024-11-15 10:53:15.561794] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.969 [2024-11-15 10:53:15.574063] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.969 [2024-11-15 10:53:15.574097] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.969 [2024-11-15 10:53:15.581996] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.969 [2024-11-15 10:53:15.582029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.969 [2024-11-15 10:53:15.593129] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.969 [2024-11-15 10:53:15.593293] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.969 [2024-11-15 10:53:15.604261] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.969 [2024-11-15 10:53:15.604423] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.969 [2024-11-15 10:53:15.612473] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.969 [2024-11-15 10:53:15.612506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.969 [2024-11-15 10:53:15.623699] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.969 [2024-11-15 10:53:15.623732] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.969 [2024-11-15 10:53:15.634239] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.969 [2024-11-15 10:53:15.634315] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.969 [2024-11-15 10:53:15.642246] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.969 [2024-11-15 10:53:15.642308] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.969 [2024-11-15 10:53:15.653878] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.970 [2024-11-15 10:53:15.653911] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.970 [2024-11-15 10:53:15.664813] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.970 [2024-11-15 10:53:15.664847] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.970 [2024-11-15 10:53:15.673129] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.970 [2024-11-15 10:53:15.673161] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.970 [2024-11-15 10:53:15.684413] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.970 [2024-11-15 10:53:15.684445] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.970 [2024-11-15 10:53:15.695219] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.970 [2024-11-15 10:53:15.695383] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.970 [2024-11-15 10:53:15.703248] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.970 [2024-11-15 10:53:15.703406] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.970 [2024-11-15 10:53:15.715247] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.970 [2024-11-15 10:53:15.715406] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.970 [2024-11-15 10:53:15.731255] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.970 [2024-11-15 10:53:15.731425] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.970 [2024-11-15 10:53:15.748913] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.970 [2024-11-15 10:53:15.749077] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.970 [2024-11-15 10:53:15.758716] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.970 [2024-11-15 10:53:15.758869] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.970 [2024-11-15 10:53:15.769462] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.970 [2024-11-15 10:53:15.769700] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.970 [2024-11-15 10:53:15.781371] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.970 [2024-11-15 10:53:15.781554] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.970 [2024-11-15 10:53:15.789964] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.970 [2024-11-15 10:53:15.790122] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.970 [2024-11-15 10:53:15.800871] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.970 [2024-11-15 10:53:15.801017] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.970 [2024-11-15 10:53:15.810144] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.970 [2024-11-15 10:53:15.810343] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.970 [2024-11-15 10:53:15.820366] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.970 [2024-11-15 10:53:15.820554] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.229 [2024-11-15 10:53:15.831310] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.229 [2024-11-15 10:53:15.831479] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.229 [2024-11-15 10:53:15.842848] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.229 [2024-11-15 10:53:15.842995] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.229 [2024-11-15 10:53:15.859435] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.229 [2024-11-15 10:53:15.859613] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.229 11555.50 IOPS, 90.28 MiB/s [2024-11-15T10:53:16.090Z] [2024-11-15 10:53:15.874462] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.229 [2024-11-15 10:53:15.874681] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.229 [2024-11-15 
10:53:15.884059] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.229 [2024-11-15 10:53:15.884225] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.229 [2024-11-15 10:53:15.894831] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.229 [2024-11-15 10:53:15.894996] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.229 [2024-11-15 10:53:15.904684] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.229 [2024-11-15 10:53:15.904851] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.229 [2024-11-15 10:53:15.914642] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.229 [2024-11-15 10:53:15.914821] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.229 [2024-11-15 10:53:15.928754] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.229 [2024-11-15 10:53:15.928921] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.229 [2024-11-15 10:53:15.937447] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.229 [2024-11-15 10:53:15.937665] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.229 [2024-11-15 10:53:15.947991] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.229 [2024-11-15 10:53:15.948155] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.229 [2024-11-15 10:53:15.958158] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.229 [2024-11-15 10:53:15.958361] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.229 [2024-11-15 10:53:15.968297] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.229 [2024-11-15 10:53:15.968461] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.229 [2024-11-15 10:53:15.978222] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.229 [2024-11-15 10:53:15.978418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.229 [2024-11-15 10:53:15.987888] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.229 [2024-11-15 10:53:15.988054] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.229 [2024-11-15 10:53:15.997273] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.229 [2024-11-15 10:53:15.997440] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.229 [2024-11-15 10:53:16.006622] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.229 [2024-11-15 10:53:16.006801] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.229 [2024-11-15 10:53:16.016476] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.229 [2024-11-15 10:53:16.016679] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.229 [2024-11-15 10:53:16.026330] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.229 [2024-11-15 10:53:16.026366] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.229 [2024-11-15 10:53:16.035832] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.229 [2024-11-15 10:53:16.035999] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.229 [2024-11-15 10:53:16.045353] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.229 [2024-11-15 10:53:16.045387] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.229 [2024-11-15 10:53:16.054878] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.229 [2024-11-15 10:53:16.055042] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.229 [2024-11-15 10:53:16.064400] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.229 [2024-11-15 10:53:16.064434] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.229 [2024-11-15 10:53:16.073722] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.229 [2024-11-15 10:53:16.073757] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.229 [2024-11-15 10:53:16.083310] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.229 [2024-11-15 10:53:16.083346] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.488 [2024-11-15 10:53:16.094233] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.488 [2024-11-15 10:53:16.094296] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.488 [2024-11-15 10:53:16.105811] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.488 [2024-11-15 10:53:16.105844] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.488 [2024-11-15 10:53:16.121930] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.488 [2024-11-15 10:53:16.121963] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.488 [2024-11-15 10:53:16.130705] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.488 [2024-11-15 10:53:16.130742] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.488 [2024-11-15 10:53:16.142341] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.488 [2024-11-15 10:53:16.142376] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.488 [2024-11-15 10:53:16.154168] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.488 [2024-11-15 10:53:16.154236] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.488 [2024-11-15 10:53:16.170051] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.488 [2024-11-15 10:53:16.170087] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.488 [2024-11-15 10:53:16.187896] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.488 [2024-11-15 10:53:16.187930] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.488 [2024-11-15 10:53:16.197714] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.488 [2024-11-15 10:53:16.197748] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.488 [2024-11-15 10:53:16.208247] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.488 [2024-11-15 10:53:16.208282] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.488 [2024-11-15 10:53:16.218262] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.488 [2024-11-15 10:53:16.218331] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.488 [2024-11-15 10:53:16.227929] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.488 [2024-11-15 10:53:16.227962] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.488 [2024-11-15 10:53:16.237457] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.488 [2024-11-15 10:53:16.237673] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.488 [2024-11-15 10:53:16.252071] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.488 [2024-11-15 10:53:16.252107] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.488 [2024-11-15 10:53:16.262109] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.488 [2024-11-15 10:53:16.262143] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.488 [2024-11-15 10:53:16.276453] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.488 [2024-11-15 10:53:16.276487] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.488 [2024-11-15 10:53:16.294724] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.488 [2024-11-15 10:53:16.294772] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.488 [2024-11-15 10:53:16.304653] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.488 [2024-11-15 10:53:16.304687] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.488 [2024-11-15 10:53:16.318218] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.488 [2024-11-15 10:53:16.318306] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.488 [2024-11-15 10:53:16.327033] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.488 [2024-11-15 10:53:16.327067] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.488 [2024-11-15 10:53:16.337263] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.488 [2024-11-15 10:53:16.337296] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.747 [2024-11-15 10:53:16.347481] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.747 [2024-11-15 10:53:16.347514] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.747 [2024-11-15 10:53:16.357510] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.747 [2024-11-15 10:53:16.357554] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.747 [2024-11-15 10:53:16.367226] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.747 [2024-11-15 10:53:16.367260] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.747 [2024-11-15 10:53:16.376962] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.747 [2024-11-15 10:53:16.376995] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.747 [2024-11-15 10:53:16.387028] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.747 [2024-11-15 10:53:16.387061] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.747 [2024-11-15 10:53:16.397302] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.747 [2024-11-15 10:53:16.397338] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.747 [2024-11-15 10:53:16.409869] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.747 [2024-11-15 10:53:16.409921] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.747 [2024-11-15 10:53:16.418939] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.747 [2024-11-15 10:53:16.418972] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.747 [2024-11-15 10:53:16.431282] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.747 [2024-11-15 10:53:16.431316] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.747 [2024-11-15 10:53:16.441132] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.747 [2024-11-15 10:53:16.441166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.747 [2024-11-15 10:53:16.452761] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.747 [2024-11-15 10:53:16.452795] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.748 [2024-11-15 10:53:16.464917] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.748 [2024-11-15 10:53:16.464952] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.748 [2024-11-15 10:53:16.476753] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.748 [2024-11-15 10:53:16.476788] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.748 [2024-11-15 10:53:16.487119] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.748 [2024-11-15 10:53:16.487294] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.748 [2024-11-15 10:53:16.498023] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.748 [2024-11-15 10:53:16.498226] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.748 [2024-11-15 10:53:16.510449] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.748 [2024-11-15 10:53:16.510488] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.748 [2024-11-15 10:53:16.519313] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.748 [2024-11-15 10:53:16.519347] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.748 [2024-11-15 10:53:16.533643] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.748 [2024-11-15 10:53:16.533676] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.748 [2024-11-15 10:53:16.542157] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.748 [2024-11-15 10:53:16.542207] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.748 [2024-11-15 10:53:16.556794] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.748 [2024-11-15 10:53:16.556958] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.748 [2024-11-15 10:53:16.565963] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.748 [2024-11-15 10:53:16.565998] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.748 [2024-11-15 10:53:16.581791] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.748 [2024-11-15 10:53:16.581824] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.748 [2024-11-15 10:53:16.593012] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.748 [2024-11-15 10:53:16.593046] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.748 [2024-11-15 10:53:16.602379] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.748 [2024-11-15 10:53:16.602416] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.007 [2024-11-15 10:53:16.614315] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.007 [2024-11-15 10:53:16.614355] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.007 [2024-11-15 10:53:16.624564] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.007 [2024-11-15 10:53:16.624598] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.007 [2024-11-15 10:53:16.635036] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.007 [2024-11-15 10:53:16.635069] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.007 [2024-11-15 10:53:16.644855] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.007 [2024-11-15 10:53:16.644898] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.007 [2024-11-15 10:53:16.656221] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.008 [2024-11-15 10:53:16.656387] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.008 [2024-11-15 10:53:16.671699] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.008 [2024-11-15 10:53:16.671733] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.008 [2024-11-15 10:53:16.682810] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.008 [2024-11-15 10:53:16.682976] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.008 [2024-11-15 10:53:16.699252] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.008 [2024-11-15 10:53:16.699286] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.008 [2024-11-15 10:53:16.716230] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.008 [2024-11-15 10:53:16.716266] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.008 [2024-11-15 10:53:16.725397] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.008 [2024-11-15 10:53:16.725429] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.008 [2024-11-15 10:53:16.734790] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.008 [2024-11-15 10:53:16.734823] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.008 [2024-11-15 10:53:16.744856] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.008 [2024-11-15 10:53:16.744891] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.008 [2024-11-15 10:53:16.755047] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.008 [2024-11-15 10:53:16.755080] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.008 [2024-11-15 10:53:16.769075] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.008 [2024-11-15 10:53:16.769109] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.008 [2024-11-15 10:53:16.777640] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.008 [2024-11-15 10:53:16.777674] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.008 [2024-11-15 10:53:16.789589] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.008 [2024-11-15 10:53:16.789795] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.008 [2024-11-15 10:53:16.804707] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.008 [2024-11-15 10:53:16.804876] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.008 [2024-11-15 10:53:16.820316] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.008 [2024-11-15 10:53:16.820547] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.008 [2024-11-15 10:53:16.830186] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.008 [2024-11-15 10:53:16.830372] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.008 [2024-11-15 10:53:16.841635] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.008 [2024-11-15 10:53:16.841793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.008 [2024-11-15 10:53:16.851969] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.008 [2024-11-15 10:53:16.852133] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.008 [2024-11-15 10:53:16.862122] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:30.008 [2024-11-15 10:53:16.862351] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:30.267 [2024-11-15 10:53:16.873031] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:30.267 [2024-11-15 10:53:16.873189] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:30.267 11758.80 IOPS, 91.87 MiB/s [2024-11-15T10:53:17.128Z] [2024-11-15 10:53:16.881625] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:30.267 [2024-11-15 10:53:16.881779] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:30.267 
00:09:30.267                                                                            Latency(us)
00:09:30.267 [2024-11-15T10:53:17.128Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:09:30.267 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:09:30.267 Nvme1n1                     :       5.01   11763.59      91.90       0.00     0.00   10869.92    4170.47   23712.12
00:09:30.267 [2024-11-15T10:53:17.128Z] ===================================================================================================================
00:09:30.267 [2024-11-15T10:53:17.128Z] Total                       :            11763.59      91.90       0.00     0.00   10869.92    4170.47   23712.12
00:09:30.267 [2024-11-15 10:53:16.889117] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:30.267 [2024-11-15 10:53:16.889286] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:30.268 [2024-11-15 10:53:16.897113] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:30.268 [2024-11-15 10:53:16.897262] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:30.268 [2024-11-15 10:53:16.905114] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:30.268 [2024-11-15 10:53:16.905254] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:30.268 [2024-11-15 10:53:16.913114] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:30.268 [2024-11-15 10:53:16.913258] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:30.268 [2024-11-15 10:53:16.921115] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:30.268 [2024-11-15 10:53:16.921260] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:30.268 [2024-11-15 10:53:16.929118] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:30.268 [2024-11-15 10:53:16.929272] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:30.268 [2024-11-15 10:53:16.937118] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:30.268 [2024-11-15 10:53:16.937258] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:30.268 [2024-11-15 10:53:16.949122] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:30.268 [2024-11-15 10:53:16.949260] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:30.268 [2024-11-15 10:53:16.957123] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:30.268 [2024-11-15 10:53:16.957261] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:30.268 [2024-11-15 
10:53:16.969127] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.268 [2024-11-15 10:53:16.969263] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.268 [2024-11-15 10:53:16.977127] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.268 [2024-11-15 10:53:16.977268] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.268 [2024-11-15 10:53:16.985130] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.268 [2024-11-15 10:53:16.985267] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.268 [2024-11-15 10:53:16.993129] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.268 [2024-11-15 10:53:16.993268] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.268 [2024-11-15 10:53:17.001134] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.268 [2024-11-15 10:53:17.001320] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.268 [2024-11-15 10:53:17.013145] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.268 [2024-11-15 10:53:17.013321] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.268 [2024-11-15 10:53:17.021141] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.268 [2024-11-15 10:53:17.021330] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.268 [2024-11-15 10:53:17.029141] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.268 [2024-11-15 10:53:17.029307] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.268 [2024-11-15 10:53:17.037143] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.268 [2024-11-15 10:53:17.037282] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.268 [2024-11-15 10:53:17.045147] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.268 [2024-11-15 10:53:17.045296] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.268 [2024-11-15 10:53:17.057149] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.268 [2024-11-15 10:53:17.057287] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.268 [2024-11-15 10:53:17.065152] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.268 [2024-11-15 10:53:17.065294] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.268 [2024-11-15 10:53:17.073154] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.268 [2024-11-15 10:53:17.073293] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.268 [2024-11-15 10:53:17.081155] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.268 [2024-11-15 10:53:17.081292] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.268 [2024-11-15 10:53:17.089155] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.268 [2024-11-15 10:53:17.089292] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.268 [2024-11-15 10:53:17.101163] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.268 [2024-11-15 10:53:17.101300] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.268 [2024-11-15 10:53:17.109160] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.268 [2024-11-15 10:53:17.109296] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.268 [2024-11-15 10:53:17.121173] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.268 [2024-11-15 10:53:17.121344] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.527 [2024-11-15 10:53:17.129185] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.527 [2024-11-15 10:53:17.129355] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.527 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (65370) - No such process 00:09:30.527 10:53:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 65370 00:09:30.527 10:53:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:30.527 10:53:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.527 10:53:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:30.527 10:53:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.527 10:53:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:30.527 10:53:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.527 10:53:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:30.527 delay0 00:09:30.527 10:53:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.527 10:53:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:09:30.527 10:53:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.527 10:53:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:30.527 10:53:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.527 10:53:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1' 00:09:30.527 [2024-11-15 10:53:17.328760] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:37.092 Initializing NVMe Controllers 00:09:37.092 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:09:37.092 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:37.092 Initialization complete. Launching workers. 
00:09:37.092 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 110 00:09:37.092 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 397, failed to submit 33 00:09:37.092 success 268, unsuccessful 129, failed 0 00:09:37.092 10:53:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:09:37.092 10:53:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:09:37.092 10:53:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:37.092 10:53:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:09:37.092 10:53:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:37.092 10:53:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:09:37.092 10:53:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:37.092 10:53:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:37.092 rmmod nvme_tcp 00:09:37.092 rmmod nvme_fabrics 00:09:37.092 rmmod nvme_keyring 00:09:37.092 10:53:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:37.092 10:53:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:09:37.092 10:53:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:09:37.092 10:53:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 65222 ']' 00:09:37.092 10:53:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 65222 00:09:37.092 10:53:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 65222 ']' 00:09:37.093 10:53:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 65222 00:09:37.093 10:53:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:09:37.093 10:53:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:37.093 10:53:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65222 00:09:37.093 killing process with pid 65222 00:09:37.093 10:53:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:37.093 10:53:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:37.093 10:53:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65222' 00:09:37.093 10:53:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 65222 00:09:37.093 10:53:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 65222 00:09:37.093 10:53:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:37.093 10:53:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:37.093 10:53:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:37.093 10:53:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:09:37.093 10:53:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:09:37.093 10:53:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:37.093 10:53:23 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:09:37.093 10:53:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:37.093 10:53:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:37.093 10:53:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:37.093 10:53:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:37.093 10:53:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:37.093 10:53:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:37.093 10:53:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:37.093 10:53:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:37.093 10:53:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:37.093 10:53:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:37.093 10:53:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:37.093 10:53:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:37.093 10:53:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:37.093 10:53:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:37.352 10:53:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:37.352 10:53:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:37.352 10:53:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:37.352 10:53:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:37.352 10:53:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:37.352 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0 00:09:37.352 00:09:37.352 real 0m24.529s 00:09:37.352 user 0m38.875s 00:09:37.352 sys 0m7.796s 00:09:37.352 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:37.352 ************************************ 00:09:37.352 END TEST nvmf_zcopy 00:09:37.352 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:37.352 ************************************ 00:09:37.352 10:53:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:37.352 10:53:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:37.352 10:53:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:37.352 10:53:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:37.352 ************************************ 00:09:37.352 START TEST nvmf_nmic 00:09:37.352 ************************************ 00:09:37.352 10:53:24 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:37.352 * Looking for test storage... 00:09:37.352 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:37.352 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:37.352 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:09:37.352 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:37.632 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:37.632 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:37.632 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:37.632 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:37.632 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:09:37.632 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:09:37.632 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:09:37.632 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:09:37.632 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:09:37.632 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:09:37.632 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:09:37.632 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:37.632 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:09:37.632 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:09:37.632 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:37.632 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:37.632 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:09:37.632 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:09:37.632 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:37.632 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:09:37.632 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:09:37.632 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:09:37.632 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:09:37.632 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:37.632 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:09:37.632 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:09:37.632 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:37.632 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:37.632 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:09:37.632 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:37.632 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:37.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.632 --rc genhtml_branch_coverage=1 00:09:37.632 --rc genhtml_function_coverage=1 00:09:37.632 --rc genhtml_legend=1 00:09:37.632 --rc geninfo_all_blocks=1 00:09:37.632 --rc geninfo_unexecuted_blocks=1 00:09:37.632 00:09:37.632 ' 00:09:37.632 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:37.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.632 --rc genhtml_branch_coverage=1 00:09:37.632 --rc genhtml_function_coverage=1 00:09:37.632 --rc genhtml_legend=1 00:09:37.632 --rc geninfo_all_blocks=1 00:09:37.632 --rc geninfo_unexecuted_blocks=1 00:09:37.632 00:09:37.632 ' 00:09:37.632 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:37.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.632 --rc genhtml_branch_coverage=1 00:09:37.632 --rc genhtml_function_coverage=1 00:09:37.632 --rc genhtml_legend=1 00:09:37.632 --rc geninfo_all_blocks=1 00:09:37.632 --rc geninfo_unexecuted_blocks=1 00:09:37.632 00:09:37.632 ' 00:09:37.632 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:37.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.632 --rc genhtml_branch_coverage=1 00:09:37.632 --rc genhtml_function_coverage=1 00:09:37.632 --rc genhtml_legend=1 00:09:37.632 --rc geninfo_all_blocks=1 00:09:37.632 --rc geninfo_unexecuted_blocks=1 00:09:37.632 00:09:37.632 ' 00:09:37.632 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:37.632 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:37.632 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:37.632 10:53:24 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:37.632 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:37.632 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:37.632 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:37.632 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:37.632 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:37.632 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:37.632 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:37.632 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:37.632 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:09:37.632 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:09:37.632 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:37.632 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:37.632 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:37.632 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:37.632 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:37.632 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:09:37.632 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:37.632 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:37.632 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:37.632 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.632 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.632 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.632 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:37.633 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.633 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:09:37.633 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:37.633 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:37.633 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:37.633 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:37.633 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:37.633 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:37.633 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:37.633 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:37.633 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:37.633 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:37.633 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:37.633 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:37.633 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:37.633 10:53:24 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:37.633 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:37.633 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:37.633 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:37.633 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:37.633 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:37.633 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:37.633 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:37.633 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:37.633 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:37.633 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:37.633 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:37.633 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:37.633 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:37.633 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:37.633 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:37.633 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:37.633 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:37.633 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:37.633 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:37.633 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:37.633 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:37.633 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:37.633 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:37.633 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:37.633 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:37.633 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:37.633 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:37.633 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:37.633 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:37.633 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:37.633 Cannot 
find device "nvmf_init_br" 00:09:37.633 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:09:37.633 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:37.633 Cannot find device "nvmf_init_br2" 00:09:37.633 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:09:37.633 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:37.633 Cannot find device "nvmf_tgt_br" 00:09:37.633 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # true 00:09:37.633 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:37.633 Cannot find device "nvmf_tgt_br2" 00:09:37.633 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # true 00:09:37.633 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:37.633 Cannot find device "nvmf_init_br" 00:09:37.633 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # true 00:09:37.633 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:37.633 Cannot find device "nvmf_init_br2" 00:09:37.633 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # true 00:09:37.633 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:37.633 Cannot find device "nvmf_tgt_br" 00:09:37.633 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # true 00:09:37.633 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:37.633 Cannot find device "nvmf_tgt_br2" 00:09:37.633 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # true 00:09:37.633 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:37.633 Cannot find device "nvmf_br" 00:09:37.633 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # true 00:09:37.633 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:37.633 Cannot find device "nvmf_init_if" 00:09:37.633 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # true 00:09:37.633 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:37.633 Cannot find device "nvmf_init_if2" 00:09:37.633 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # true 00:09:37.633 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:37.633 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:37.633 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # true 00:09:37.633 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:37.633 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:37.633 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # true 00:09:37.633 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:37.633 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 
00:09:37.633 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:37.633 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:37.633 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:37.929 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:37.929 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:37.929 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:37.929 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:37.929 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:37.929 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:37.929 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:37.929 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:37.929 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:37.929 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:37.929 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:37.929 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:37.929 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:37.929 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:37.929 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:37.929 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:37.929 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:37.929 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:37.929 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:37.929 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:37.929 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:37.929 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:37.929 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:37.929 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@218 
-- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:37.929 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:37.929 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:37.929 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:37.929 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:37.929 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:37.929 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.117 ms 00:09:37.929 00:09:37.929 --- 10.0.0.3 ping statistics --- 00:09:37.929 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:37.929 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:09:37.929 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:37.929 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:37.929 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.062 ms 00:09:37.929 00:09:37.929 --- 10.0.0.4 ping statistics --- 00:09:37.929 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:37.929 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:09:37.929 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:37.929 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:37.929 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:09:37.929 00:09:37.929 --- 10.0.0.1 ping statistics --- 00:09:37.929 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:37.929 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:09:37.929 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:37.929 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:37.929 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:09:37.929 00:09:37.929 --- 10.0.0.2 ping statistics --- 00:09:37.929 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:37.929 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:09:37.929 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:37.930 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@461 -- # return 0 00:09:37.930 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:37.930 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:37.930 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:37.930 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:37.930 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:37.930 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:37.930 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:37.930 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:37.930 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:37.930 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:37.930 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:37.930 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=65746 00:09:37.930 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:37.930 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 65746 00:09:37.930 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 65746 ']' 00:09:37.930 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:37.930 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:37.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:37.930 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:37.930 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:37.930 10:53:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:38.188 [2024-11-15 10:53:24.844879] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
00:09:38.188 [2024-11-15 10:53:24.845049] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:38.188 [2024-11-15 10:53:24.997201] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:38.446 [2024-11-15 10:53:25.064681] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:38.446 [2024-11-15 10:53:25.064773] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:38.446 [2024-11-15 10:53:25.064789] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:38.446 [2024-11-15 10:53:25.064799] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:38.446 [2024-11-15 10:53:25.064809] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:38.446 [2024-11-15 10:53:25.066411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:38.446 [2024-11-15 10:53:25.066592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:38.447 [2024-11-15 10:53:25.066463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:38.447 [2024-11-15 10:53:25.066595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:38.447 [2024-11-15 10:53:25.148573] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:39.383 10:53:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:39.383 10:53:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:09:39.383 10:53:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:39.383 10:53:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:39.383 10:53:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:39.383 10:53:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:39.383 10:53:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:39.383 10:53:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.383 10:53:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:39.383 [2024-11-15 10:53:25.977696] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:39.383 10:53:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.383 10:53:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:39.383 10:53:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.383 10:53:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:39.383 Malloc0 00:09:39.383 10:53:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.383 10:53:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:39.383 10:53:26 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.383 10:53:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:39.383 10:53:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.383 10:53:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:39.383 10:53:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.383 10:53:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:39.383 10:53:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.383 10:53:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:39.383 10:53:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.383 10:53:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:39.383 [2024-11-15 10:53:26.052316] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:39.383 10:53:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.383 test case1: single bdev can't be used in multiple subsystems 00:09:39.383 10:53:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:39.383 10:53:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:39.383 10:53:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.383 10:53:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:39.383 10:53:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.383 10:53:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:09:39.383 10:53:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.383 10:53:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:39.383 10:53:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.383 10:53:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:39.383 10:53:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:39.383 10:53:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.383 10:53:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:39.383 [2024-11-15 10:53:26.076106] bdev.c:8198:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:39.383 [2024-11-15 10:53:26.076152] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:39.383 [2024-11-15 10:53:26.076174] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.383 request: 00:09:39.383 { 00:09:39.383 
"nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:39.383 "namespace": { 00:09:39.383 "bdev_name": "Malloc0", 00:09:39.383 "no_auto_visible": false 00:09:39.383 }, 00:09:39.383 "method": "nvmf_subsystem_add_ns", 00:09:39.384 "req_id": 1 00:09:39.384 } 00:09:39.384 Got JSON-RPC error response 00:09:39.384 response: 00:09:39.384 { 00:09:39.384 "code": -32602, 00:09:39.384 "message": "Invalid parameters" 00:09:39.384 } 00:09:39.384 10:53:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:39.384 10:53:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:39.384 10:53:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:39.384 Adding namespace failed - expected result. 00:09:39.384 10:53:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:39.384 test case2: host connect to nvmf target in multiple paths 00:09:39.384 10:53:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:39.384 10:53:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:09:39.384 10:53:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.384 10:53:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:39.384 [2024-11-15 10:53:26.088193] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:09:39.384 10:53:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.384 10:53:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --hostid=02f14d39-9b07-4abc-bc4a-e88d43a336ca -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:09:39.384 10:53:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --hostid=02f14d39-9b07-4abc-bc4a-e88d43a336ca -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 00:09:39.643 10:53:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:39.643 10:53:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:09:39.643 10:53:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:39.643 10:53:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:09:39.643 10:53:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:09:41.551 10:53:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:41.551 10:53:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:41.551 10:53:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:41.551 10:53:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:09:41.551 10:53:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:41.551 10:53:28 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0
00:09:41.551 10:53:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
00:09:41.551 [global]
00:09:41.551 thread=1
00:09:41.551 invalidate=1
00:09:41.551 rw=write
00:09:41.551 time_based=1
00:09:41.551 runtime=1
00:09:41.551 ioengine=libaio
00:09:41.551 direct=1
00:09:41.551 bs=4096
00:09:41.551 iodepth=1
00:09:41.551 norandommap=0
00:09:41.551 numjobs=1
00:09:41.551
00:09:41.551 verify_dump=1
00:09:41.551 verify_backlog=512
00:09:41.551 verify_state_save=0
00:09:41.551 do_verify=1
00:09:41.551 verify=crc32c-intel
00:09:41.810 [job0]
00:09:41.810 filename=/dev/nvme0n1
00:09:41.810 Could not set queue depth (nvme0n1)
00:09:41.810 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:09:41.810 fio-3.35
00:09:41.810 Starting 1 thread
00:09:43.196
00:09:43.196 job0: (groupid=0, jobs=1): err= 0: pid=65837: Fri Nov 15 10:53:29 2024
00:09:43.196 read: IOPS=2698, BW=10.5MiB/s (11.1MB/s)(10.6MiB/1001msec)
00:09:43.197 slat (nsec): min=10549, max=60053, avg=12701.84, stdev=3259.46
00:09:43.197 clat (usec): min=129, max=322, avg=195.56, stdev=28.95
00:09:43.197 lat (usec): min=140, max=336, avg=208.26, stdev=29.23
00:09:43.197 clat percentiles (usec):
00:09:43.197 | 1.00th=[ 143], 5.00th=[ 153], 10.00th=[ 161], 20.00th=[ 172],
00:09:43.197 | 30.00th=[ 178], 40.00th=[ 186], 50.00th=[ 192], 60.00th=[ 200],
00:09:43.197 | 70.00th=[ 210], 80.00th=[ 221], 90.00th=[ 235], 95.00th=[ 247],
00:09:43.197 | 99.00th=[ 273], 99.50th=[ 281], 99.90th=[ 310], 99.95th=[ 310],
00:09:43.197 | 99.99th=[ 322]
00:09:43.197 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets
00:09:43.197 slat (usec): min=13, max=117, avg=19.15, stdev= 5.85
00:09:43.197 clat (usec): min=79, max=707, avg=120.58, stdev=26.93
00:09:43.197 lat (usec): min=94, max=734, avg=139.73, stdev=28.88
00:09:43.197 clat percentiles (usec):
00:09:43.197 | 1.00th=[ 84], 5.00th=[ 90], 10.00th=[ 94], 20.00th=[ 99],
00:09:43.197 | 30.00th=[ 105], 40.00th=[ 112], 50.00th=[ 118], 60.00th=[ 124],
00:09:43.197 | 70.00th=[ 131], 80.00th=[ 139], 90.00th=[ 151], 95.00th=[ 163],
00:09:43.197 | 99.00th=[ 188], 99.50th=[ 198], 99.90th=[ 310], 99.95th=[ 519],
00:09:43.197 | 99.99th=[ 709]
00:09:43.197 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1
00:09:43.197 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1
00:09:43.197 lat (usec) : 100=11.40%, 250=86.52%, 500=2.04%, 750=0.03%
00:09:43.197 cpu : usr=1.20%, sys=8.00%, ctx=5776, majf=0, minf=5
00:09:43.197 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:09:43.197 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:43.197 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:43.197 issued rwts: total=2701,3072,0,0 short=0,0,0,0 dropped=0,0,0,0
00:09:43.197 latency : target=0, window=0, percentile=100.00%, depth=1
00:09:43.197
00:09:43.197 Run status group 0 (all jobs):
00:09:43.197 READ: bw=10.5MiB/s (11.1MB/s), 10.5MiB/s-10.5MiB/s (11.1MB/s-11.1MB/s), io=10.6MiB (11.1MB), run=1001-1001msec
00:09:43.197 WRITE: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec
00:09:43.197
00:09:43.197 Disk stats (read/write):
00:09:43.197 nvme0n1: ios=2610/2571, merge=0/0, ticks=538/356,
in_queue=894, util=91.48% 00:09:43.197 10:53:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:43.197 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:43.197 10:53:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:43.197 10:53:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:09:43.197 10:53:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:43.197 10:53:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:43.197 10:53:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:43.197 10:53:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:43.197 10:53:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:09:43.197 10:53:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:43.197 10:53:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:43.197 10:53:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:43.197 10:53:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:09:43.197 10:53:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:43.197 10:53:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:09:43.197 10:53:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:43.197 10:53:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:43.197 rmmod nvme_tcp 00:09:43.197 rmmod nvme_fabrics 00:09:43.197 rmmod nvme_keyring 00:09:43.197 10:53:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:43.197 10:53:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:09:43.197 10:53:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:09:43.197 10:53:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 65746 ']' 00:09:43.197 10:53:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 65746 00:09:43.197 10:53:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 65746 ']' 00:09:43.197 10:53:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 65746 00:09:43.197 10:53:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:09:43.197 10:53:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:43.197 10:53:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65746 00:09:43.197 killing process with pid 65746 00:09:43.197 10:53:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:43.197 10:53:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:43.197 10:53:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65746' 00:09:43.197 10:53:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # 
kill 65746 00:09:43.197 10:53:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 65746 00:09:43.456 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:43.456 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:43.456 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:43.456 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:09:43.456 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:09:43.456 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:09:43.456 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:43.456 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:43.456 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:43.456 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:43.456 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:43.456 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:43.456 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:43.456 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:43.456 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:43.456 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:43.456 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:43.456 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:43.715 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:43.715 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:43.715 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:43.715 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:43.715 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:43.716 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:43.716 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:43.716 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:43.716 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 00:09:43.716 00:09:43.716 real 0m6.374s 00:09:43.716 user 0m19.843s 00:09:43.716 sys 0m2.058s 00:09:43.716 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:43.716 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:43.716 ************************************ 
00:09:43.716 END TEST nvmf_nmic 00:09:43.716 ************************************ 00:09:43.716 10:53:30 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:43.716 10:53:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:43.716 10:53:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:43.716 10:53:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:43.716 ************************************ 00:09:43.716 START TEST nvmf_fio_target 00:09:43.716 ************************************ 00:09:43.716 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:43.976 * Looking for test storage... 00:09:43.976 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:43.976 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:43.976 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:09:43.976 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:43.976 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:43.976 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:43.976 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:43.976 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:43.976 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:09:43.976 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:09:43.976 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:09:43.976 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:09:43.976 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:09:43.976 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:09:43.976 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:09:43.976 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:43.976 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:09:43.976 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:09:43.976 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:43.976 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:43.976 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:09:43.976 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:09:43.976 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:43.976 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:09:43.976 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:09:43.976 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:09:43.976 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:09:43.976 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:43.976 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:09:43.976 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:09:43.976 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:43.976 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:43.976 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:09:43.976 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:43.976 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:43.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.976 --rc genhtml_branch_coverage=1 00:09:43.976 --rc genhtml_function_coverage=1 00:09:43.976 --rc genhtml_legend=1 00:09:43.976 --rc geninfo_all_blocks=1 00:09:43.976 --rc geninfo_unexecuted_blocks=1 00:09:43.976 00:09:43.976 ' 00:09:43.976 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:43.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.976 --rc genhtml_branch_coverage=1 00:09:43.976 --rc genhtml_function_coverage=1 00:09:43.976 --rc genhtml_legend=1 00:09:43.976 --rc geninfo_all_blocks=1 00:09:43.976 --rc geninfo_unexecuted_blocks=1 00:09:43.976 00:09:43.976 ' 00:09:43.976 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:43.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.976 --rc genhtml_branch_coverage=1 00:09:43.976 --rc genhtml_function_coverage=1 00:09:43.976 --rc genhtml_legend=1 00:09:43.976 --rc geninfo_all_blocks=1 00:09:43.976 --rc geninfo_unexecuted_blocks=1 00:09:43.976 00:09:43.976 ' 00:09:43.976 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:43.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.976 --rc genhtml_branch_coverage=1 00:09:43.976 --rc genhtml_function_coverage=1 00:09:43.976 --rc genhtml_legend=1 00:09:43.976 --rc geninfo_all_blocks=1 00:09:43.976 --rc geninfo_unexecuted_blocks=1 00:09:43.976 00:09:43.976 ' 00:09:43.976 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:43.976 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:09:43.976 
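The cmp_versions trace above reduces to a simple pattern: split each dotted version on ".", then walk the fields and compare them numerically. A minimal standalone sketch of that pattern follows; it is not the SPDK scripts/common.sh helper itself and assumes purely numeric dotted versions (the real helper also splits on "-" and ":").

    # sketch: return 0 (true) when $1 is a lower version than $2, assuming numeric fields
    version_lt() {
        local -a ver1 ver2
        IFS=. read -ra ver1 <<< "$1"
        IFS=. read -ra ver2 <<< "$2"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing fields compare as 0
            (( a > b )) && return 1
            (( a < b )) && return 0
        done
        return 1   # equal versions are not "less than"
    }

    version_lt 1.15 2 && echo "1.15 < 2"   # same outcome as the 'lt 1.15 2' check in the trace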
10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:43.976 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:43.976 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:43.976 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:43.976 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:43.976 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:43.976 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:43.976 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:43.976 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:43.976 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:43.976 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:09:43.976 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:09:43.976 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:43.976 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:43.976 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:43.976 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:43.976 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:43.976 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:09:43.977 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:43.977 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:43.977 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:43.977 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.977 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.977 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.977 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:43.977 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.977 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:09:43.977 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:43.977 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:43.977 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:43.977 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:43.977 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:43.977 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:43.977 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:43.977 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:43.977 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:43.977 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:43.977 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:43.977 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:43.977 10:53:30 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:43.977 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:43.977 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:43.977 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:43.977 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:43.977 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:43.977 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:43.977 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:43.977 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:43.977 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:43.977 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:43.977 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:43.977 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:43.977 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:43.977 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:43.977 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:43.977 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:43.977 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:43.977 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:43.977 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:43.977 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:43.977 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:43.977 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:43.977 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:43.977 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:43.977 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:43.977 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:43.977 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:43.977 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:43.977 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:43.977 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:43.977 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:43.977 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:43.977 Cannot find device "nvmf_init_br" 00:09:43.977 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:09:43.977 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:43.977 Cannot find device "nvmf_init_br2" 00:09:43.977 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:09:43.977 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:43.977 Cannot find device "nvmf_tgt_br" 00:09:43.977 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # true 00:09:43.977 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:43.977 Cannot find device "nvmf_tgt_br2" 00:09:43.977 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # true 00:09:43.977 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:43.977 Cannot find device "nvmf_init_br" 00:09:43.977 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # true 00:09:43.977 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:43.977 Cannot find device "nvmf_init_br2" 00:09:43.977 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # true 00:09:43.977 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:43.977 Cannot find device "nvmf_tgt_br" 00:09:43.977 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # true 00:09:43.977 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:43.977 Cannot find device "nvmf_tgt_br2" 00:09:43.977 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # true 00:09:43.977 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:44.236 Cannot find device "nvmf_br" 00:09:44.236 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # true 00:09:44.236 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:44.236 Cannot find device "nvmf_init_if" 00:09:44.236 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # true 00:09:44.236 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:44.236 Cannot find device "nvmf_init_if2" 00:09:44.236 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # true 00:09:44.236 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:44.237 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:44.237 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # true 00:09:44.237 
10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:44.237 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:44.237 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # true 00:09:44.237 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:44.237 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:44.237 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:44.237 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:44.237 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:44.237 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:44.237 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:44.237 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:44.237 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:44.237 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:44.237 10:53:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:44.237 10:53:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:44.237 10:53:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:44.237 10:53:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:44.237 10:53:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:44.237 10:53:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:44.237 10:53:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:44.237 10:53:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:44.237 10:53:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:44.237 10:53:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:44.237 10:53:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:44.237 10:53:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:44.237 10:53:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:44.237 10:53:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br 00:09:44.496 10:53:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:44.496 10:53:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:44.496 10:53:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:44.496 10:53:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:44.496 10:53:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:44.496 10:53:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:44.496 10:53:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:44.496 10:53:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:44.496 10:53:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:44.496 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:44.496 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.098 ms 00:09:44.496 00:09:44.496 --- 10.0.0.3 ping statistics --- 00:09:44.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:44.496 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:09:44.496 10:53:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:44.496 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:44.496 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.047 ms 00:09:44.496 00:09:44.496 --- 10.0.0.4 ping statistics --- 00:09:44.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:44.496 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:09:44.496 10:53:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:44.496 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:44.496 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:09:44.496 00:09:44.496 --- 10.0.0.1 ping statistics --- 00:09:44.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:44.496 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:09:44.496 10:53:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:44.496 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:44.496 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:09:44.496 00:09:44.496 --- 10.0.0.2 ping statistics --- 00:09:44.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:44.496 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:09:44.496 10:53:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:44.496 10:53:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@461 -- # return 0 00:09:44.496 10:53:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:44.496 10:53:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:44.496 10:53:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:44.496 10:53:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:44.496 10:53:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:44.496 10:53:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:44.496 10:53:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:44.496 10:53:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:44.496 10:53:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:44.496 10:53:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:44.496 10:53:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:44.496 10:53:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=66070 00:09:44.496 10:53:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:44.496 10:53:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 66070 00:09:44.496 10:53:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 66070 ']' 00:09:44.496 10:53:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:44.496 10:53:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:44.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:44.496 10:53:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:44.496 10:53:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:44.496 10:53:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:44.496 [2024-11-15 10:53:31.241555] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
00:09:44.496 [2024-11-15 10:53:31.241651] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:44.755 [2024-11-15 10:53:31.393277] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:44.755 [2024-11-15 10:53:31.450640] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:44.755 [2024-11-15 10:53:31.450708] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:44.755 [2024-11-15 10:53:31.450722] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:44.755 [2024-11-15 10:53:31.450733] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:44.755 [2024-11-15 10:53:31.450743] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:44.755 [2024-11-15 10:53:31.452052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:44.755 [2024-11-15 10:53:31.452191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:44.755 [2024-11-15 10:53:31.452325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:44.755 [2024-11-15 10:53:31.452334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:44.755 [2024-11-15 10:53:31.509239] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:44.755 10:53:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:44.755 10:53:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:09:44.755 10:53:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:44.755 10:53:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:44.755 10:53:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:45.014 10:53:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:45.014 10:53:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:45.273 [2024-11-15 10:53:31.920639] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:45.273 10:53:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:45.564 10:53:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:45.564 10:53:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:45.823 10:53:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:45.823 10:53:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:46.082 10:53:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:46.082 10:53:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:46.341 10:53:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:46.341 10:53:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:46.600 10:53:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:46.859 10:53:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:46.859 10:53:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:47.119 10:53:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:47.119 10:53:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:47.378 10:53:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:47.378 10:53:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:09:47.637 10:53:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:47.895 10:53:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:47.895 10:53:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:48.153 10:53:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:48.153 10:53:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:48.412 10:53:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:48.671 [2024-11-15 10:53:35.296713] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:48.671 10:53:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:48.930 10:53:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:48.930 10:53:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --hostid=02f14d39-9b07-4abc-bc4a-e88d43a336ca -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:09:49.188 10:53:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:49.188 10:53:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:09:49.188 10:53:35 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:49.188 10:53:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:09:49.188 10:53:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:09:49.188 10:53:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:09:51.093 10:53:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:51.093 10:53:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:51.093 10:53:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:51.093 10:53:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:09:51.093 10:53:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:51.093 10:53:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:09:51.093 10:53:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:51.093 [global] 00:09:51.093 thread=1 00:09:51.093 invalidate=1 00:09:51.093 rw=write 00:09:51.093 time_based=1 00:09:51.093 runtime=1 00:09:51.093 ioengine=libaio 00:09:51.093 direct=1 00:09:51.093 bs=4096 00:09:51.093 iodepth=1 00:09:51.093 norandommap=0 00:09:51.093 numjobs=1 00:09:51.093 00:09:51.093 verify_dump=1 00:09:51.093 verify_backlog=512 00:09:51.093 verify_state_save=0 00:09:51.093 do_verify=1 00:09:51.093 verify=crc32c-intel 00:09:51.093 [job0] 00:09:51.093 filename=/dev/nvme0n1 00:09:51.093 [job1] 00:09:51.093 filename=/dev/nvme0n2 00:09:51.093 [job2] 00:09:51.093 filename=/dev/nvme0n3 00:09:51.093 [job3] 00:09:51.093 filename=/dev/nvme0n4 00:09:51.376 Could not set queue depth (nvme0n1) 00:09:51.376 Could not set queue depth (nvme0n2) 00:09:51.376 Could not set queue depth (nvme0n3) 00:09:51.376 Could not set queue depth (nvme0n4) 00:09:51.376 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:51.376 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:51.376 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:51.376 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:51.376 fio-3.35 00:09:51.376 Starting 4 threads 00:09:52.764 00:09:52.764 job0: (groupid=0, jobs=1): err= 0: pid=66247: Fri Nov 15 10:53:39 2024 00:09:52.764 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:09:52.764 slat (nsec): min=10427, max=55896, avg=14465.39, stdev=5260.57 00:09:52.764 clat (usec): min=183, max=743, avg=247.15, stdev=34.27 00:09:52.764 lat (usec): min=196, max=761, avg=261.61, stdev=35.05 00:09:52.764 clat percentiles (usec): 00:09:52.764 | 1.00th=[ 196], 5.00th=[ 206], 10.00th=[ 212], 20.00th=[ 223], 00:09:52.764 | 30.00th=[ 229], 40.00th=[ 237], 50.00th=[ 243], 60.00th=[ 249], 00:09:52.764 | 70.00th=[ 260], 80.00th=[ 269], 90.00th=[ 285], 95.00th=[ 302], 00:09:52.764 | 99.00th=[ 334], 99.50th=[ 355], 99.90th=[ 553], 99.95th=[ 701], 00:09:52.764 | 99.99th=[ 742] 
00:09:52.764 write: IOPS=2231, BW=8927KiB/s (9141kB/s)(8936KiB/1001msec); 0 zone resets 00:09:52.764 slat (nsec): min=14860, max=93630, avg=20852.13, stdev=6505.98 00:09:52.764 clat (usec): min=130, max=388, avg=183.78, stdev=27.12 00:09:52.764 lat (usec): min=148, max=481, avg=204.64, stdev=28.73 00:09:52.764 clat percentiles (usec): 00:09:52.764 | 1.00th=[ 139], 5.00th=[ 147], 10.00th=[ 153], 20.00th=[ 161], 00:09:52.764 | 30.00th=[ 167], 40.00th=[ 174], 50.00th=[ 180], 60.00th=[ 186], 00:09:52.764 | 70.00th=[ 196], 80.00th=[ 204], 90.00th=[ 221], 95.00th=[ 235], 00:09:52.764 | 99.00th=[ 262], 99.50th=[ 273], 99.90th=[ 289], 99.95th=[ 310], 00:09:52.764 | 99.99th=[ 388] 00:09:52.764 bw ( KiB/s): min= 9045, max= 9045, per=28.78%, avg=9045.00, stdev= 0.00, samples=1 00:09:52.764 iops : min= 2261, max= 2261, avg=2261.00, stdev= 0.00, samples=1 00:09:52.764 lat (usec) : 250=79.92%, 500=19.99%, 750=0.09% 00:09:52.764 cpu : usr=2.00%, sys=5.60%, ctx=4285, majf=0, minf=5 00:09:52.764 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:52.764 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:52.764 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:52.764 issued rwts: total=2048,2234,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:52.764 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:52.764 job1: (groupid=0, jobs=1): err= 0: pid=66248: Fri Nov 15 10:53:39 2024 00:09:52.764 read: IOPS=1429, BW=5718KiB/s (5856kB/s)(5724KiB/1001msec) 00:09:52.764 slat (nsec): min=15838, max=82177, avg=25567.87, stdev=8755.75 00:09:52.764 clat (usec): min=191, max=819, avg=354.11, stdev=73.80 00:09:52.764 lat (usec): min=207, max=857, avg=379.68, stdev=77.21 00:09:52.764 clat percentiles (usec): 00:09:52.764 | 1.00th=[ 245], 5.00th=[ 269], 10.00th=[ 285], 20.00th=[ 306], 00:09:52.764 | 30.00th=[ 322], 40.00th=[ 330], 50.00th=[ 343], 60.00th=[ 355], 00:09:52.764 | 70.00th=[ 367], 80.00th=[ 383], 90.00th=[ 420], 95.00th=[ 474], 00:09:52.764 | 99.00th=[ 668], 99.50th=[ 709], 99.90th=[ 791], 99.95th=[ 824], 00:09:52.764 | 99.99th=[ 824] 00:09:52.764 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:09:52.764 slat (usec): min=23, max=171, avg=36.07, stdev= 9.63 00:09:52.764 clat (usec): min=116, max=838, avg=255.33, stdev=63.10 00:09:52.764 lat (usec): min=149, max=874, avg=291.40, stdev=64.98 00:09:52.764 clat percentiles (usec): 00:09:52.764 | 1.00th=[ 133], 5.00th=[ 172], 10.00th=[ 194], 20.00th=[ 210], 00:09:52.764 | 30.00th=[ 223], 40.00th=[ 235], 50.00th=[ 249], 60.00th=[ 262], 00:09:52.764 | 70.00th=[ 273], 80.00th=[ 289], 90.00th=[ 326], 95.00th=[ 383], 00:09:52.764 | 99.00th=[ 433], 99.50th=[ 486], 99.90th=[ 783], 99.95th=[ 840], 00:09:52.764 | 99.99th=[ 840] 00:09:52.764 bw ( KiB/s): min= 8175, max= 8175, per=26.01%, avg=8175.00, stdev= 0.00, samples=1 00:09:52.764 iops : min= 2043, max= 2043, avg=2043.00, stdev= 0.00, samples=1 00:09:52.764 lat (usec) : 250=27.33%, 500=70.41%, 750=2.09%, 1000=0.17% 00:09:52.764 cpu : usr=2.80%, sys=6.50%, ctx=2967, majf=0, minf=9 00:09:52.764 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:52.764 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:52.765 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:52.765 issued rwts: total=1431,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:52.765 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:52.765 job2: 
(groupid=0, jobs=1): err= 0: pid=66249: Fri Nov 15 10:53:39 2024 00:09:52.765 read: IOPS=2206, BW=8827KiB/s (9039kB/s)(8836KiB/1001msec) 00:09:52.765 slat (nsec): min=10794, max=44659, avg=14363.91, stdev=4061.34 00:09:52.765 clat (usec): min=142, max=2186, avg=216.36, stdev=55.07 00:09:52.765 lat (usec): min=162, max=2204, avg=230.72, stdev=55.46 00:09:52.765 clat percentiles (usec): 00:09:52.765 | 1.00th=[ 159], 5.00th=[ 172], 10.00th=[ 178], 20.00th=[ 188], 00:09:52.765 | 30.00th=[ 196], 40.00th=[ 204], 50.00th=[ 212], 60.00th=[ 221], 00:09:52.765 | 70.00th=[ 231], 80.00th=[ 241], 90.00th=[ 260], 95.00th=[ 273], 00:09:52.765 | 99.00th=[ 306], 99.50th=[ 322], 99.90th=[ 644], 99.95th=[ 816], 00:09:52.765 | 99.99th=[ 2180] 00:09:52.765 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:09:52.765 slat (nsec): min=13579, max=91281, avg=21842.57, stdev=7088.45 00:09:52.765 clat (usec): min=99, max=301, avg=166.55, stdev=29.90 00:09:52.765 lat (usec): min=116, max=353, avg=188.39, stdev=31.23 00:09:52.765 clat percentiles (usec): 00:09:52.765 | 1.00th=[ 116], 5.00th=[ 125], 10.00th=[ 133], 20.00th=[ 143], 00:09:52.765 | 30.00th=[ 149], 40.00th=[ 155], 50.00th=[ 163], 60.00th=[ 169], 00:09:52.765 | 70.00th=[ 178], 80.00th=[ 190], 90.00th=[ 208], 95.00th=[ 223], 00:09:52.765 | 99.00th=[ 255], 99.50th=[ 262], 99.90th=[ 289], 99.95th=[ 297], 00:09:52.765 | 99.99th=[ 302] 00:09:52.765 bw ( KiB/s): min=10794, max=10794, per=34.34%, avg=10794.00, stdev= 0.00, samples=1 00:09:52.765 iops : min= 2698, max= 2698, avg=2698.00, stdev= 0.00, samples=1 00:09:52.765 lat (usec) : 100=0.02%, 250=92.85%, 500=7.05%, 750=0.04%, 1000=0.02% 00:09:52.765 lat (msec) : 4=0.02% 00:09:52.765 cpu : usr=1.40%, sys=7.50%, ctx=4769, majf=0, minf=13 00:09:52.765 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:52.765 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:52.765 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:52.765 issued rwts: total=2209,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:52.765 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:52.765 job3: (groupid=0, jobs=1): err= 0: pid=66250: Fri Nov 15 10:53:39 2024 00:09:52.765 read: IOPS=1392, BW=5570KiB/s (5704kB/s)(5576KiB/1001msec) 00:09:52.765 slat (nsec): min=15355, max=84163, avg=23487.12, stdev=7164.41 00:09:52.765 clat (usec): min=193, max=2193, avg=349.91, stdev=72.81 00:09:52.765 lat (usec): min=215, max=2211, avg=373.39, stdev=74.33 00:09:52.765 clat percentiles (usec): 00:09:52.765 | 1.00th=[ 249], 5.00th=[ 277], 10.00th=[ 289], 20.00th=[ 310], 00:09:52.765 | 30.00th=[ 322], 40.00th=[ 334], 50.00th=[ 343], 60.00th=[ 355], 00:09:52.765 | 70.00th=[ 367], 80.00th=[ 383], 90.00th=[ 408], 95.00th=[ 449], 00:09:52.765 | 99.00th=[ 529], 99.50th=[ 545], 99.90th=[ 734], 99.95th=[ 2180], 00:09:52.765 | 99.99th=[ 2180] 00:09:52.765 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:09:52.765 slat (usec): min=21, max=132, avg=36.22, stdev=10.67 00:09:52.765 clat (usec): min=123, max=870, avg=270.24, stdev=81.24 00:09:52.765 lat (usec): min=153, max=914, avg=306.46, stdev=86.23 00:09:52.765 clat percentiles (usec): 00:09:52.765 | 1.00th=[ 143], 5.00th=[ 180], 10.00th=[ 198], 20.00th=[ 215], 00:09:52.765 | 30.00th=[ 227], 40.00th=[ 239], 50.00th=[ 251], 60.00th=[ 265], 00:09:52.765 | 70.00th=[ 281], 80.00th=[ 306], 90.00th=[ 392], 95.00th=[ 449], 00:09:52.765 | 99.00th=[ 545], 99.50th=[ 578], 99.90th=[ 693], 
99.95th=[ 873], 00:09:52.765 | 99.99th=[ 873] 00:09:52.765 bw ( KiB/s): min= 8175, max= 8175, per=26.01%, avg=8175.00, stdev= 0.00, samples=1 00:09:52.765 iops : min= 2043, max= 2043, avg=2043.00, stdev= 0.00, samples=1 00:09:52.765 lat (usec) : 250=25.94%, 500=72.15%, 750=1.84%, 1000=0.03% 00:09:52.765 lat (msec) : 4=0.03% 00:09:52.765 cpu : usr=2.40%, sys=6.60%, ctx=2930, majf=0, minf=10 00:09:52.765 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:52.765 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:52.765 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:52.765 issued rwts: total=1394,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:52.765 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:52.765 00:09:52.765 Run status group 0 (all jobs): 00:09:52.765 READ: bw=27.6MiB/s (29.0MB/s), 5570KiB/s-8827KiB/s (5704kB/s-9039kB/s), io=27.7MiB (29.0MB), run=1001-1001msec 00:09:52.765 WRITE: bw=30.7MiB/s (32.2MB/s), 6138KiB/s-9.99MiB/s (6285kB/s-10.5MB/s), io=30.7MiB (32.2MB), run=1001-1001msec 00:09:52.765 00:09:52.765 Disk stats (read/write): 00:09:52.765 nvme0n1: ios=1715/2048, merge=0/0, ticks=450/394, in_queue=844, util=88.18% 00:09:52.765 nvme0n2: ios=1095/1536, merge=0/0, ticks=402/412, in_queue=814, util=88.22% 00:09:52.765 nvme0n3: ios=1983/2048, merge=0/0, ticks=437/370, in_queue=807, util=89.12% 00:09:52.765 nvme0n4: ios=1024/1514, merge=0/0, ticks=368/426, in_queue=794, util=89.77% 00:09:52.765 10:53:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:52.765 [global] 00:09:52.765 thread=1 00:09:52.765 invalidate=1 00:09:52.765 rw=randwrite 00:09:52.765 time_based=1 00:09:52.765 runtime=1 00:09:52.765 ioengine=libaio 00:09:52.765 direct=1 00:09:52.765 bs=4096 00:09:52.765 iodepth=1 00:09:52.765 norandommap=0 00:09:52.765 numjobs=1 00:09:52.765 00:09:52.765 verify_dump=1 00:09:52.765 verify_backlog=512 00:09:52.765 verify_state_save=0 00:09:52.765 do_verify=1 00:09:52.765 verify=crc32c-intel 00:09:52.765 [job0] 00:09:52.765 filename=/dev/nvme0n1 00:09:52.765 [job1] 00:09:52.765 filename=/dev/nvme0n2 00:09:52.765 [job2] 00:09:52.765 filename=/dev/nvme0n3 00:09:52.765 [job3] 00:09:52.765 filename=/dev/nvme0n4 00:09:52.765 Could not set queue depth (nvme0n1) 00:09:52.765 Could not set queue depth (nvme0n2) 00:09:52.765 Could not set queue depth (nvme0n3) 00:09:52.765 Could not set queue depth (nvme0n4) 00:09:52.765 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:52.765 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:52.765 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:52.765 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:52.765 fio-3.35 00:09:52.765 Starting 4 threads 00:09:54.143 00:09:54.143 job0: (groupid=0, jobs=1): err= 0: pid=66308: Fri Nov 15 10:53:40 2024 00:09:54.143 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:09:54.143 slat (nsec): min=11924, max=40427, avg=16893.93, stdev=2985.67 00:09:54.143 clat (usec): min=150, max=849, avg=313.22, stdev=49.08 00:09:54.143 lat (usec): min=170, max=864, avg=330.12, stdev=49.21 00:09:54.143 clat percentiles (usec): 00:09:54.143 | 1.00th=[ 243], 
5.00th=[ 258], 10.00th=[ 269], 20.00th=[ 281], 00:09:54.143 | 30.00th=[ 289], 40.00th=[ 302], 50.00th=[ 310], 60.00th=[ 318], 00:09:54.143 | 70.00th=[ 326], 80.00th=[ 338], 90.00th=[ 355], 95.00th=[ 371], 00:09:54.143 | 99.00th=[ 519], 99.50th=[ 619], 99.90th=[ 717], 99.95th=[ 848], 00:09:54.143 | 99.99th=[ 848] 00:09:54.143 write: IOPS=1708, BW=6833KiB/s (6997kB/s)(6840KiB/1001msec); 0 zone resets 00:09:54.143 slat (usec): min=17, max=128, avg=30.29, stdev= 7.65 00:09:54.143 clat (usec): min=111, max=847, avg=253.27, stdev=58.70 00:09:54.143 lat (usec): min=134, max=881, avg=283.56, stdev=61.61 00:09:54.143 clat percentiles (usec): 00:09:54.143 | 1.00th=[ 133], 5.00th=[ 186], 10.00th=[ 206], 20.00th=[ 221], 00:09:54.143 | 30.00th=[ 229], 40.00th=[ 237], 50.00th=[ 245], 60.00th=[ 255], 00:09:54.143 | 70.00th=[ 265], 80.00th=[ 277], 90.00th=[ 302], 95.00th=[ 379], 00:09:54.143 | 99.00th=[ 478], 99.50th=[ 502], 99.90th=[ 545], 99.95th=[ 848], 00:09:54.143 | 99.99th=[ 848] 00:09:54.143 bw ( KiB/s): min= 8192, max= 8192, per=24.57%, avg=8192.00, stdev= 0.00, samples=1 00:09:54.143 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:54.143 lat (usec) : 250=30.50%, 500=68.73%, 750=0.71%, 1000=0.06% 00:09:54.144 cpu : usr=1.30%, sys=6.60%, ctx=3247, majf=0, minf=17 00:09:54.144 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:54.144 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:54.144 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:54.144 issued rwts: total=1536,1710,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:54.144 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:54.144 job1: (groupid=0, jobs=1): err= 0: pid=66309: Fri Nov 15 10:53:40 2024 00:09:54.144 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:09:54.144 slat (nsec): min=13112, max=45405, avg=16151.77, stdev=3150.73 00:09:54.144 clat (usec): min=186, max=396, avg=238.81, stdev=21.92 00:09:54.144 lat (usec): min=201, max=412, avg=254.97, stdev=22.41 00:09:54.144 clat percentiles (usec): 00:09:54.144 | 1.00th=[ 200], 5.00th=[ 208], 10.00th=[ 215], 20.00th=[ 221], 00:09:54.144 | 30.00th=[ 227], 40.00th=[ 231], 50.00th=[ 237], 60.00th=[ 243], 00:09:54.144 | 70.00th=[ 249], 80.00th=[ 255], 90.00th=[ 265], 95.00th=[ 281], 00:09:54.144 | 99.00th=[ 302], 99.50th=[ 318], 99.90th=[ 326], 99.95th=[ 338], 00:09:54.144 | 99.99th=[ 396] 00:09:54.144 write: IOPS=2305, BW=9223KiB/s (9444kB/s)(9232KiB/1001msec); 0 zone resets 00:09:54.144 slat (usec): min=15, max=536, avg=23.58, stdev=11.85 00:09:54.144 clat (usec): min=127, max=2247, avg=179.99, stdev=62.39 00:09:54.144 lat (usec): min=148, max=2268, avg=203.57, stdev=63.75 00:09:54.144 clat percentiles (usec): 00:09:54.144 | 1.00th=[ 141], 5.00th=[ 151], 10.00th=[ 155], 20.00th=[ 161], 00:09:54.144 | 30.00th=[ 167], 40.00th=[ 172], 50.00th=[ 178], 60.00th=[ 182], 00:09:54.144 | 70.00th=[ 186], 80.00th=[ 194], 90.00th=[ 202], 95.00th=[ 210], 00:09:54.144 | 99.00th=[ 247], 99.50th=[ 273], 99.90th=[ 611], 99.95th=[ 2008], 00:09:54.144 | 99.99th=[ 2245] 00:09:54.144 bw ( KiB/s): min= 8936, max= 8936, per=26.80%, avg=8936.00, stdev= 0.00, samples=1 00:09:54.144 iops : min= 2234, max= 2234, avg=2234.00, stdev= 0.00, samples=1 00:09:54.144 lat (usec) : 250=86.55%, 500=13.36%, 750=0.05% 00:09:54.144 lat (msec) : 4=0.05% 00:09:54.144 cpu : usr=1.70%, sys=6.80%, ctx=4361, majf=0, minf=11 00:09:54.144 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 
00:09:54.144 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:54.144 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:54.144 issued rwts: total=2048,2308,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:54.144 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:54.144 job2: (groupid=0, jobs=1): err= 0: pid=66310: Fri Nov 15 10:53:40 2024 00:09:54.144 read: IOPS=2372, BW=9488KiB/s (9716kB/s)(9488KiB/1000msec) 00:09:54.144 slat (usec): min=11, max=156, avg=13.94, stdev= 5.61 00:09:54.144 clat (usec): min=154, max=6452, avg=209.72, stdev=156.15 00:09:54.144 lat (usec): min=167, max=6464, avg=223.66, stdev=156.38 00:09:54.144 clat percentiles (usec): 00:09:54.144 | 1.00th=[ 163], 5.00th=[ 172], 10.00th=[ 178], 20.00th=[ 184], 00:09:54.144 | 30.00th=[ 190], 40.00th=[ 196], 50.00th=[ 200], 60.00th=[ 206], 00:09:54.144 | 70.00th=[ 215], 80.00th=[ 223], 90.00th=[ 239], 95.00th=[ 253], 00:09:54.144 | 99.00th=[ 285], 99.50th=[ 302], 99.90th=[ 2409], 99.95th=[ 3556], 00:09:54.144 | 99.99th=[ 6456] 00:09:54.144 write: IOPS=2560, BW=10.0MiB/s (10.5MB/s)(10.0MiB/1000msec); 0 zone resets 00:09:54.144 slat (usec): min=13, max=3026, avg=20.90, stdev=59.61 00:09:54.144 clat (usec): min=14, max=4124, avg=159.21, stdev=90.71 00:09:54.144 lat (usec): min=122, max=4180, avg=180.11, stdev=107.88 00:09:54.144 clat percentiles (usec): 00:09:54.144 | 1.00th=[ 115], 5.00th=[ 123], 10.00th=[ 129], 20.00th=[ 137], 00:09:54.144 | 30.00th=[ 143], 40.00th=[ 147], 50.00th=[ 153], 60.00th=[ 159], 00:09:54.144 | 70.00th=[ 167], 80.00th=[ 174], 90.00th=[ 188], 95.00th=[ 198], 00:09:54.144 | 99.00th=[ 241], 99.50th=[ 351], 99.90th=[ 963], 99.95th=[ 1532], 00:09:54.144 | 99.99th=[ 4113] 00:09:54.144 bw ( KiB/s): min=12080, max=12080, per=36.23%, avg=12080.00, stdev= 0.00, samples=1 00:09:54.144 iops : min= 3020, max= 3020, avg=3020.00, stdev= 0.00, samples=1 00:09:54.144 lat (usec) : 20=0.02%, 250=97.02%, 500=2.76%, 750=0.04%, 1000=0.04% 00:09:54.144 lat (msec) : 2=0.04%, 4=0.04%, 10=0.04% 00:09:54.144 cpu : usr=2.00%, sys=6.60%, ctx=4941, majf=0, minf=13 00:09:54.144 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:54.144 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:54.144 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:54.144 issued rwts: total=2372,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:54.144 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:54.144 job3: (groupid=0, jobs=1): err= 0: pid=66311: Fri Nov 15 10:53:40 2024 00:09:54.144 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:09:54.144 slat (nsec): min=14605, max=73633, avg=20354.12, stdev=4674.24 00:09:54.144 clat (usec): min=182, max=1111, avg=314.02, stdev=58.21 00:09:54.144 lat (usec): min=202, max=1128, avg=334.37, stdev=60.19 00:09:54.144 clat percentiles (usec): 00:09:54.144 | 1.00th=[ 239], 5.00th=[ 253], 10.00th=[ 265], 20.00th=[ 277], 00:09:54.144 | 30.00th=[ 285], 40.00th=[ 297], 50.00th=[ 310], 60.00th=[ 318], 00:09:54.144 | 70.00th=[ 326], 80.00th=[ 338], 90.00th=[ 355], 95.00th=[ 379], 00:09:54.144 | 99.00th=[ 578], 99.50th=[ 635], 99.90th=[ 693], 99.95th=[ 1106], 00:09:54.144 | 99.99th=[ 1106] 00:09:54.144 write: IOPS=1763, BW=7053KiB/s (7222kB/s)(7060KiB/1001msec); 0 zone resets 00:09:54.144 slat (usec): min=18, max=115, avg=30.57, stdev= 7.06 00:09:54.144 clat (usec): min=109, max=967, avg=240.59, stdev=42.11 00:09:54.144 lat (usec): min=132, max=1001, avg=271.16, 
stdev=42.89 00:09:54.144 clat percentiles (usec): 00:09:54.144 | 1.00th=[ 141], 5.00th=[ 180], 10.00th=[ 198], 20.00th=[ 212], 00:09:54.144 | 30.00th=[ 223], 40.00th=[ 231], 50.00th=[ 239], 60.00th=[ 249], 00:09:54.144 | 70.00th=[ 258], 80.00th=[ 269], 90.00th=[ 281], 95.00th=[ 302], 00:09:54.144 | 99.00th=[ 359], 99.50th=[ 375], 99.90th=[ 420], 99.95th=[ 971], 00:09:54.144 | 99.99th=[ 971] 00:09:54.144 bw ( KiB/s): min= 8192, max= 8192, per=24.57%, avg=8192.00, stdev= 0.00, samples=1 00:09:54.144 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:54.144 lat (usec) : 250=34.96%, 500=63.89%, 750=1.09%, 1000=0.03% 00:09:54.144 lat (msec) : 2=0.03% 00:09:54.144 cpu : usr=2.20%, sys=6.50%, ctx=3301, majf=0, minf=7 00:09:54.144 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:54.144 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:54.144 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:54.144 issued rwts: total=1536,1765,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:54.144 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:54.144 00:09:54.144 Run status group 0 (all jobs): 00:09:54.144 READ: bw=29.2MiB/s (30.7MB/s), 6138KiB/s-9488KiB/s (6285kB/s-9716kB/s), io=29.3MiB (30.7MB), run=1000-1001msec 00:09:54.144 WRITE: bw=32.6MiB/s (34.1MB/s), 6833KiB/s-10.0MiB/s (6997kB/s-10.5MB/s), io=32.6MiB (34.2MB), run=1000-1001msec 00:09:54.144 00:09:54.144 Disk stats (read/write): 00:09:54.144 nvme0n1: ios=1330/1536, merge=0/0, ticks=451/405, in_queue=856, util=88.28% 00:09:54.144 nvme0n2: ios=1750/2048, merge=0/0, ticks=428/386, in_queue=814, util=87.96% 00:09:54.144 nvme0n3: ios=2048/2206, merge=0/0, ticks=418/355, in_queue=773, util=88.41% 00:09:54.144 nvme0n4: ios=1305/1536, merge=0/0, ticks=420/385, in_queue=805, util=89.78% 00:09:54.144 10:53:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:54.144 [global] 00:09:54.144 thread=1 00:09:54.144 invalidate=1 00:09:54.144 rw=write 00:09:54.144 time_based=1 00:09:54.144 runtime=1 00:09:54.144 ioengine=libaio 00:09:54.144 direct=1 00:09:54.144 bs=4096 00:09:54.144 iodepth=128 00:09:54.144 norandommap=0 00:09:54.144 numjobs=1 00:09:54.144 00:09:54.144 verify_dump=1 00:09:54.144 verify_backlog=512 00:09:54.144 verify_state_save=0 00:09:54.144 do_verify=1 00:09:54.144 verify=crc32c-intel 00:09:54.144 [job0] 00:09:54.144 filename=/dev/nvme0n1 00:09:54.144 [job1] 00:09:54.144 filename=/dev/nvme0n2 00:09:54.144 [job2] 00:09:54.144 filename=/dev/nvme0n3 00:09:54.144 [job3] 00:09:54.144 filename=/dev/nvme0n4 00:09:54.144 Could not set queue depth (nvme0n1) 00:09:54.144 Could not set queue depth (nvme0n2) 00:09:54.144 Could not set queue depth (nvme0n3) 00:09:54.144 Could not set queue depth (nvme0n4) 00:09:54.144 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:54.144 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:54.144 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:54.144 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:54.144 fio-3.35 00:09:54.144 Starting 4 threads 00:09:55.523 00:09:55.523 job0: (groupid=0, jobs=1): err= 0: pid=66364: Fri Nov 15 10:53:42 2024 00:09:55.523 read: 
IOPS=1804, BW=7218KiB/s (7392kB/s)(7240KiB/1003msec) 00:09:55.523 slat (usec): min=4, max=9997, avg=265.07, stdev=1083.51 00:09:55.523 clat (usec): min=809, max=46101, avg=30854.63, stdev=5846.14 00:09:55.523 lat (usec): min=4648, max=46177, avg=31119.70, stdev=5917.06 00:09:55.523 clat percentiles (usec): 00:09:55.523 | 1.00th=[ 5014], 5.00th=[22676], 10.00th=[25560], 20.00th=[28443], 00:09:55.523 | 30.00th=[30016], 40.00th=[30540], 50.00th=[31327], 60.00th=[31851], 00:09:55.523 | 70.00th=[32637], 80.00th=[33817], 90.00th=[36439], 95.00th=[40109], 00:09:55.523 | 99.00th=[43254], 99.50th=[44303], 99.90th=[45351], 99.95th=[45876], 00:09:55.523 | 99.99th=[45876] 00:09:55.523 write: IOPS=2041, BW=8167KiB/s (8364kB/s)(8192KiB/1003msec); 0 zone resets 00:09:55.523 slat (usec): min=12, max=14892, avg=245.38, stdev=900.82 00:09:55.523 clat (usec): min=22852, max=47743, avg=33955.33, stdev=3966.42 00:09:55.523 lat (usec): min=22882, max=47781, avg=34200.71, stdev=3995.87 00:09:55.523 clat percentiles (usec): 00:09:55.523 | 1.00th=[26084], 5.00th=[28967], 10.00th=[29492], 20.00th=[31589], 00:09:55.523 | 30.00th=[32375], 40.00th=[32637], 50.00th=[33162], 60.00th=[33817], 00:09:55.523 | 70.00th=[34341], 80.00th=[35914], 90.00th=[40109], 95.00th=[42730], 00:09:55.523 | 99.00th=[45876], 99.50th=[47449], 99.90th=[47973], 99.95th=[47973], 00:09:55.523 | 99.99th=[47973] 00:09:55.523 bw ( KiB/s): min= 8192, max= 8208, per=17.79%, avg=8200.00, stdev=11.31, samples=2 00:09:55.523 iops : min= 2048, max= 2052, avg=2050.00, stdev= 2.83, samples=2 00:09:55.523 lat (usec) : 1000=0.03% 00:09:55.523 lat (msec) : 10=1.09%, 20=1.06%, 50=97.82% 00:09:55.524 cpu : usr=2.30%, sys=6.89%, ctx=289, majf=0, minf=15 00:09:55.524 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:09:55.524 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:55.524 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:55.524 issued rwts: total=1810,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:55.524 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:55.524 job1: (groupid=0, jobs=1): err= 0: pid=66365: Fri Nov 15 10:53:42 2024 00:09:55.524 read: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec) 00:09:55.524 slat (usec): min=6, max=4638, avg=117.37, stdev=568.88 00:09:55.524 clat (usec): min=3870, max=18128, avg=15470.72, stdev=1554.96 00:09:55.524 lat (usec): min=3883, max=18141, avg=15588.09, stdev=1456.27 00:09:55.524 clat percentiles (usec): 00:09:55.524 | 1.00th=[ 8291], 5.00th=[13304], 10.00th=[14484], 20.00th=[15008], 00:09:55.524 | 30.00th=[15270], 40.00th=[15401], 50.00th=[15664], 60.00th=[15795], 00:09:55.524 | 70.00th=[15926], 80.00th=[16319], 90.00th=[16712], 95.00th=[17433], 00:09:55.524 | 99.00th=[17957], 99.50th=[18220], 99.90th=[18220], 99.95th=[18220], 00:09:55.524 | 99.99th=[18220] 00:09:55.524 write: IOPS=4095, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec); 0 zone resets 00:09:55.524 slat (usec): min=12, max=4027, avg=117.60, stdev=523.55 00:09:55.524 clat (usec): min=648, max=17660, avg=15370.31, stdev=971.43 00:09:55.524 lat (usec): min=671, max=17685, avg=15487.91, stdev=812.53 00:09:55.524 clat percentiles (usec): 00:09:55.524 | 1.00th=[11994], 5.00th=[14222], 10.00th=[14484], 20.00th=[14877], 00:09:55.524 | 30.00th=[15139], 40.00th=[15270], 50.00th=[15401], 60.00th=[15533], 00:09:55.524 | 70.00th=[15795], 80.00th=[15926], 90.00th=[16319], 95.00th=[16909], 00:09:55.524 | 99.00th=[17433], 99.50th=[17433], 99.90th=[17695], 
99.95th=[17695], 00:09:55.524 | 99.99th=[17695] 00:09:55.524 bw ( KiB/s): min=16384, max=16384, per=35.55%, avg=16384.00, stdev= 0.00, samples=1 00:09:55.524 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:09:55.524 lat (usec) : 750=0.04%, 1000=0.01% 00:09:55.524 lat (msec) : 4=0.09%, 10=0.70%, 20=99.17% 00:09:55.524 cpu : usr=4.10%, sys=12.50%, ctx=260, majf=0, minf=7 00:09:55.524 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:09:55.524 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:55.524 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:55.524 issued rwts: total=4096,4100,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:55.524 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:55.524 job2: (groupid=0, jobs=1): err= 0: pid=66371: Fri Nov 15 10:53:42 2024 00:09:55.524 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:09:55.524 slat (usec): min=7, max=5048, avg=150.72, stdev=750.00 00:09:55.524 clat (usec): min=13825, max=22032, avg=19952.41, stdev=1036.73 00:09:55.524 lat (usec): min=18505, max=22056, avg=20103.13, stdev=722.13 00:09:55.524 clat percentiles (usec): 00:09:55.524 | 1.00th=[15533], 5.00th=[18744], 10.00th=[19006], 20.00th=[19268], 00:09:55.524 | 30.00th=[19530], 40.00th=[19792], 50.00th=[20055], 60.00th=[20317], 00:09:55.524 | 70.00th=[20579], 80.00th=[20579], 90.00th=[21103], 95.00th=[21365], 00:09:55.524 | 99.00th=[21890], 99.50th=[21890], 99.90th=[22152], 99.95th=[22152], 00:09:55.524 | 99.99th=[22152] 00:09:55.524 write: IOPS=3357, BW=13.1MiB/s (13.8MB/s)(13.1MiB/1001msec); 0 zone resets 00:09:55.524 slat (usec): min=11, max=4716, avg=150.98, stdev=700.40 00:09:55.524 clat (usec): min=306, max=21421, avg=19289.91, stdev=2081.38 00:09:55.524 lat (usec): min=4448, max=21484, avg=19440.90, stdev=1961.79 00:09:55.524 clat percentiles (usec): 00:09:55.524 | 1.00th=[ 9372], 5.00th=[16188], 10.00th=[18744], 20.00th=[19006], 00:09:55.524 | 30.00th=[19006], 40.00th=[19268], 50.00th=[19530], 60.00th=[19792], 00:09:55.524 | 70.00th=[20055], 80.00th=[20317], 90.00th=[20841], 95.00th=[21103], 00:09:55.524 | 99.00th=[21365], 99.50th=[21365], 99.90th=[21365], 99.95th=[21365], 00:09:55.524 | 99.99th=[21365] 00:09:55.524 bw ( KiB/s): min=12320, max=12320, per=26.73%, avg=12320.00, stdev= 0.00, samples=1 00:09:55.524 iops : min= 3080, max= 3080, avg=3080.00, stdev= 0.00, samples=1 00:09:55.524 lat (usec) : 500=0.02% 00:09:55.524 lat (msec) : 10=0.85%, 20=56.23%, 50=42.90% 00:09:55.524 cpu : usr=3.80%, sys=9.70%, ctx=225, majf=0, minf=19 00:09:55.524 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:09:55.524 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:55.524 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:55.524 issued rwts: total=3072,3361,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:55.524 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:55.524 job3: (groupid=0, jobs=1): err= 0: pid=66372: Fri Nov 15 10:53:42 2024 00:09:55.524 read: IOPS=1824, BW=7298KiB/s (7473kB/s)(7320KiB/1003msec) 00:09:55.524 slat (usec): min=5, max=10989, avg=261.81, stdev=1094.23 00:09:55.524 clat (usec): min=837, max=44784, avg=31274.60, stdev=5445.52 00:09:55.524 lat (usec): min=4724, max=44800, avg=31536.41, stdev=5513.28 00:09:55.524 clat percentiles (usec): 00:09:55.524 | 1.00th=[ 5080], 5.00th=[24773], 10.00th=[28181], 20.00th=[29230], 00:09:55.524 | 30.00th=[30278], 
40.00th=[30802], 50.00th=[31327], 60.00th=[32113], 00:09:55.524 | 70.00th=[33162], 80.00th=[33817], 90.00th=[36439], 95.00th=[38536], 00:09:55.524 | 99.00th=[41681], 99.50th=[44827], 99.90th=[44827], 99.95th=[44827], 00:09:55.524 | 99.99th=[44827] 00:09:55.524 write: IOPS=2041, BW=8167KiB/s (8364kB/s)(8192KiB/1003msec); 0 zone resets 00:09:55.524 slat (usec): min=13, max=9785, avg=245.53, stdev=859.47 00:09:55.524 clat (usec): min=24746, max=45167, avg=33713.49, stdev=3271.26 00:09:55.524 lat (usec): min=24777, max=45231, avg=33959.03, stdev=3306.17 00:09:55.524 clat percentiles (usec): 00:09:55.524 | 1.00th=[26608], 5.00th=[28967], 10.00th=[29754], 20.00th=[31851], 00:09:55.524 | 30.00th=[32375], 40.00th=[32900], 50.00th=[33424], 60.00th=[33817], 00:09:55.524 | 70.00th=[34341], 80.00th=[35390], 90.00th=[38011], 95.00th=[40633], 00:09:55.524 | 99.00th=[42730], 99.50th=[44827], 99.90th=[45351], 99.95th=[45351], 00:09:55.524 | 99.99th=[45351] 00:09:55.524 bw ( KiB/s): min= 8192, max= 8208, per=17.79%, avg=8200.00, stdev=11.31, samples=2 00:09:55.524 iops : min= 2048, max= 2052, avg=2050.00, stdev= 2.83, samples=2 00:09:55.524 lat (usec) : 1000=0.03% 00:09:55.524 lat (msec) : 10=1.08%, 20=1.08%, 50=97.81% 00:09:55.524 cpu : usr=2.40%, sys=6.99%, ctx=289, majf=0, minf=9 00:09:55.524 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:09:55.524 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:55.524 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:55.524 issued rwts: total=1830,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:55.524 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:55.524 00:09:55.524 Run status group 0 (all jobs): 00:09:55.524 READ: bw=42.1MiB/s (44.1MB/s), 7218KiB/s-16.0MiB/s (7392kB/s-16.8MB/s), io=42.2MiB (44.3MB), run=1001-1003msec 00:09:55.524 WRITE: bw=45.0MiB/s (47.2MB/s), 8167KiB/s-16.0MiB/s (8364kB/s-16.8MB/s), io=45.1MiB (47.3MB), run=1001-1003msec 00:09:55.524 00:09:55.524 Disk stats (read/write): 00:09:55.524 nvme0n1: ios=1586/1783, merge=0/0, ticks=16450/18473, in_queue=34923, util=88.88% 00:09:55.524 nvme0n2: ios=3529/3584, merge=0/0, ticks=12290/12113, in_queue=24403, util=88.47% 00:09:55.524 nvme0n3: ios=2566/2944, merge=0/0, ticks=12044/12961, in_queue=25005, util=89.38% 00:09:55.524 nvme0n4: ios=1536/1791, merge=0/0, ticks=16636/18297, in_queue=34933, util=89.75% 00:09:55.524 10:53:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:09:55.524 [global] 00:09:55.524 thread=1 00:09:55.524 invalidate=1 00:09:55.524 rw=randwrite 00:09:55.524 time_based=1 00:09:55.524 runtime=1 00:09:55.524 ioengine=libaio 00:09:55.524 direct=1 00:09:55.524 bs=4096 00:09:55.524 iodepth=128 00:09:55.524 norandommap=0 00:09:55.524 numjobs=1 00:09:55.524 00:09:55.524 verify_dump=1 00:09:55.524 verify_backlog=512 00:09:55.524 verify_state_save=0 00:09:55.524 do_verify=1 00:09:55.524 verify=crc32c-intel 00:09:55.524 [job0] 00:09:55.524 filename=/dev/nvme0n1 00:09:55.524 [job1] 00:09:55.524 filename=/dev/nvme0n2 00:09:55.524 [job2] 00:09:55.524 filename=/dev/nvme0n3 00:09:55.524 [job3] 00:09:55.524 filename=/dev/nvme0n4 00:09:55.524 Could not set queue depth (nvme0n1) 00:09:55.524 Could not set queue depth (nvme0n2) 00:09:55.524 Could not set queue depth (nvme0n3) 00:09:55.524 Could not set queue depth (nvme0n4) 00:09:55.524 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:55.524 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:55.524 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:55.524 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:55.524 fio-3.35 00:09:55.524 Starting 4 threads 00:09:56.906 00:09:56.906 job0: (groupid=0, jobs=1): err= 0: pid=66428: Fri Nov 15 10:53:43 2024 00:09:56.906 read: IOPS=4277, BW=16.7MiB/s (17.5MB/s)(16.7MiB/1002msec) 00:09:56.906 slat (usec): min=8, max=4316, avg=108.72, stdev=439.12 00:09:56.906 clat (usec): min=873, max=21231, avg=14379.25, stdev=2029.60 00:09:56.906 lat (usec): min=3490, max=23364, avg=14487.97, stdev=2062.68 00:09:56.906 clat percentiles (usec): 00:09:56.906 | 1.00th=[ 8029], 5.00th=[11731], 10.00th=[11994], 20.00th=[12387], 00:09:56.906 | 30.00th=[13829], 40.00th=[14222], 50.00th=[14746], 60.00th=[15139], 00:09:56.906 | 70.00th=[15270], 80.00th=[15664], 90.00th=[16450], 95.00th=[17171], 00:09:56.906 | 99.00th=[18482], 99.50th=[19006], 99.90th=[21103], 99.95th=[21103], 00:09:56.906 | 99.99th=[21103] 00:09:56.906 write: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec); 0 zone resets 00:09:56.906 slat (usec): min=10, max=7792, avg=107.73, stdev=513.00 00:09:56.906 clat (usec): min=9829, max=23813, avg=14102.75, stdev=1966.82 00:09:56.906 lat (usec): min=9848, max=23850, avg=14210.49, stdev=2030.07 00:09:56.906 clat percentiles (usec): 00:09:56.906 | 1.00th=[10814], 5.00th=[11469], 10.00th=[11731], 20.00th=[11994], 00:09:56.906 | 30.00th=[13042], 40.00th=[13566], 50.00th=[13960], 60.00th=[14353], 00:09:56.906 | 70.00th=[14746], 80.00th=[15795], 90.00th=[17171], 95.00th=[17695], 00:09:56.906 | 99.00th=[19006], 99.50th=[19268], 99.90th=[20317], 99.95th=[20841], 00:09:56.906 | 99.99th=[23725] 00:09:56.906 bw ( KiB/s): min=16688, max=20176, per=36.23%, avg=18432.00, stdev=2466.39, samples=2 00:09:56.906 iops : min= 4172, max= 5044, avg=4608.00, stdev=616.60, samples=2 00:09:56.906 lat (usec) : 1000=0.01% 00:09:56.906 lat (msec) : 4=0.26%, 10=0.62%, 20=98.82%, 50=0.29% 00:09:56.906 cpu : usr=3.50%, sys=13.59%, ctx=372, majf=0, minf=9 00:09:56.906 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:09:56.906 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:56.906 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:56.906 issued rwts: total=4286,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:56.906 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:56.906 job1: (groupid=0, jobs=1): err= 0: pid=66429: Fri Nov 15 10:53:43 2024 00:09:56.906 read: IOPS=2039, BW=8159KiB/s (8355kB/s)(8192KiB/1004msec) 00:09:56.906 slat (usec): min=5, max=8186, avg=230.09, stdev=1204.78 00:09:56.906 clat (usec): min=18765, max=34159, avg=29919.94, stdev=3321.04 00:09:56.906 lat (usec): min=24361, max=34180, avg=30150.03, stdev=3120.57 00:09:56.906 clat percentiles (usec): 00:09:56.906 | 1.00th=[23462], 5.00th=[24773], 10.00th=[24773], 20.00th=[25035], 00:09:56.906 | 30.00th=[30016], 40.00th=[31065], 50.00th=[31327], 60.00th=[31851], 00:09:56.906 | 70.00th=[32113], 80.00th=[32375], 90.00th=[32900], 95.00th=[33424], 00:09:56.906 | 99.00th=[33817], 99.50th=[34341], 99.90th=[34341], 99.95th=[34341], 00:09:56.906 | 99.99th=[34341] 00:09:56.906 write: IOPS=2263, BW=9056KiB/s 
(9273kB/s)(9092KiB/1004msec); 0 zone resets 00:09:56.906 slat (usec): min=10, max=8139, avg=225.17, stdev=1134.95 00:09:56.906 clat (usec): min=3386, max=33893, avg=28593.19, stdev=3957.53 00:09:56.906 lat (usec): min=9793, max=34129, avg=28818.37, stdev=3804.63 00:09:56.906 clat percentiles (usec): 00:09:56.906 | 1.00th=[10290], 5.00th=[24249], 10.00th=[24249], 20.00th=[25035], 00:09:56.906 | 30.00th=[26084], 40.00th=[29492], 50.00th=[30016], 60.00th=[30802], 00:09:56.906 | 70.00th=[31065], 80.00th=[31327], 90.00th=[32113], 95.00th=[32375], 00:09:56.906 | 99.00th=[33817], 99.50th=[33817], 99.90th=[33817], 99.95th=[33817], 00:09:56.906 | 99.99th=[33817] 00:09:56.906 bw ( KiB/s): min= 8200, max= 8985, per=16.89%, avg=8592.50, stdev=555.08, samples=2 00:09:56.906 iops : min= 2050, max= 2246, avg=2148.00, stdev=138.59, samples=2 00:09:56.906 lat (msec) : 4=0.02%, 10=0.19%, 20=1.97%, 50=97.82% 00:09:56.906 cpu : usr=1.50%, sys=7.58%, ctx=136, majf=0, minf=13 00:09:56.906 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:09:56.906 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:56.906 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:56.906 issued rwts: total=2048,2273,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:56.906 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:56.906 job2: (groupid=0, jobs=1): err= 0: pid=66430: Fri Nov 15 10:53:43 2024 00:09:56.906 read: IOPS=2041, BW=8167KiB/s (8364kB/s)(8192KiB/1003msec) 00:09:56.906 slat (usec): min=5, max=8373, avg=229.64, stdev=1208.72 00:09:56.906 clat (usec): min=18767, max=34335, avg=29992.25, stdev=3337.44 00:09:56.906 lat (usec): min=24367, max=34349, avg=30221.88, stdev=3135.32 00:09:56.906 clat percentiles (usec): 00:09:56.906 | 1.00th=[23462], 5.00th=[24773], 10.00th=[25035], 20.00th=[25035], 00:09:56.906 | 30.00th=[30016], 40.00th=[31065], 50.00th=[31327], 60.00th=[31851], 00:09:56.906 | 70.00th=[32113], 80.00th=[32637], 90.00th=[33162], 95.00th=[33424], 00:09:56.906 | 99.00th=[34341], 99.50th=[34341], 99.90th=[34341], 99.95th=[34341], 00:09:56.906 | 99.99th=[34341] 00:09:56.906 write: IOPS=2298, BW=9192KiB/s (9413kB/s)(9220KiB/1003msec); 0 zone resets 00:09:56.906 slat (usec): min=11, max=8307, avg=223.73, stdev=1137.41 00:09:56.906 clat (usec): min=143, max=33537, avg=28102.75, stdev=4953.48 00:09:56.906 lat (usec): min=2838, max=34356, avg=28326.48, stdev=4842.85 00:09:56.906 clat percentiles (usec): 00:09:56.906 | 1.00th=[ 3425], 5.00th=[20841], 10.00th=[24249], 20.00th=[24773], 00:09:56.906 | 30.00th=[25822], 40.00th=[29230], 50.00th=[30016], 60.00th=[30540], 00:09:56.906 | 70.00th=[31065], 80.00th=[31327], 90.00th=[31851], 95.00th=[32113], 00:09:56.906 | 99.00th=[33424], 99.50th=[33424], 99.90th=[33424], 99.95th=[33424], 00:09:56.906 | 99.99th=[33424] 00:09:56.906 bw ( KiB/s): min= 8448, max= 8968, per=17.12%, avg=8708.00, stdev=367.70, samples=2 00:09:56.906 iops : min= 2112, max= 2242, avg=2177.00, stdev=91.92, samples=2 00:09:56.906 lat (usec) : 250=0.02% 00:09:56.906 lat (msec) : 4=0.74%, 10=0.69%, 20=1.45%, 50=97.11% 00:09:56.906 cpu : usr=1.80%, sys=5.79%, ctx=142, majf=0, minf=15 00:09:56.906 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:09:56.906 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:56.906 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:56.906 issued rwts: total=2048,2305,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:56.906 latency : 
target=0, window=0, percentile=100.00%, depth=128 00:09:56.906 job3: (groupid=0, jobs=1): err= 0: pid=66431: Fri Nov 15 10:53:43 2024 00:09:56.906 read: IOPS=3482, BW=13.6MiB/s (14.3MB/s)(13.6MiB/1003msec) 00:09:56.906 slat (usec): min=5, max=5481, avg=134.72, stdev=564.41 00:09:56.906 clat (usec): min=702, max=24695, avg=17640.58, stdev=3387.18 00:09:56.906 lat (usec): min=2725, max=24724, avg=17775.30, stdev=3429.69 00:09:56.906 clat percentiles (usec): 00:09:56.906 | 1.00th=[ 6521], 5.00th=[12518], 10.00th=[13173], 20.00th=[13829], 00:09:56.906 | 30.00th=[15401], 40.00th=[18744], 50.00th=[19006], 60.00th=[19268], 00:09:56.906 | 70.00th=[19530], 80.00th=[19792], 90.00th=[20841], 95.00th=[22152], 00:09:56.906 | 99.00th=[22938], 99.50th=[23200], 99.90th=[24249], 99.95th=[24511], 00:09:56.906 | 99.99th=[24773] 00:09:56.906 write: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec); 0 zone resets 00:09:56.906 slat (usec): min=9, max=7574, avg=139.31, stdev=688.01 00:09:56.906 clat (usec): min=10869, max=26148, avg=18129.56, stdev=2762.52 00:09:56.906 lat (usec): min=10906, max=26195, avg=18268.87, stdev=2843.21 00:09:56.906 clat percentiles (usec): 00:09:56.906 | 1.00th=[11863], 5.00th=[13304], 10.00th=[13698], 20.00th=[15008], 00:09:56.906 | 30.00th=[17695], 40.00th=[18482], 50.00th=[18482], 60.00th=[19268], 00:09:56.906 | 70.00th=[19530], 80.00th=[20317], 90.00th=[21103], 95.00th=[22152], 00:09:56.906 | 99.00th=[23987], 99.50th=[24773], 99.90th=[25560], 99.95th=[26084], 00:09:56.906 | 99.99th=[26084] 00:09:56.906 bw ( KiB/s): min=12288, max=16384, per=28.18%, avg=14336.00, stdev=2896.31, samples=2 00:09:56.906 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:09:56.906 lat (usec) : 750=0.01% 00:09:56.906 lat (msec) : 4=0.28%, 10=0.59%, 20=79.50%, 50=19.61% 00:09:56.906 cpu : usr=3.59%, sys=10.28%, ctx=309, majf=0, minf=13 00:09:56.906 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:09:56.906 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:56.907 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:56.907 issued rwts: total=3493,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:56.907 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:56.907 00:09:56.907 Run status group 0 (all jobs): 00:09:56.907 READ: bw=46.2MiB/s (48.4MB/s), 8159KiB/s-16.7MiB/s (8355kB/s-17.5MB/s), io=46.4MiB (48.6MB), run=1002-1004msec 00:09:56.907 WRITE: bw=49.7MiB/s (52.1MB/s), 9056KiB/s-18.0MiB/s (9273kB/s-18.8MB/s), io=49.9MiB (52.3MB), run=1002-1004msec 00:09:56.907 00:09:56.907 Disk stats (read/write): 00:09:56.907 nvme0n1: ios=3702/4096, merge=0/0, ticks=16503/16197, in_queue=32700, util=89.28% 00:09:56.907 nvme0n2: ios=1745/2048, merge=0/0, ticks=11975/13865, in_queue=25840, util=89.08% 00:09:56.907 nvme0n3: ios=1702/2048, merge=0/0, ticks=10922/12148, in_queue=23070, util=88.98% 00:09:56.907 nvme0n4: ios=3070/3086, merge=0/0, ticks=17300/15924, in_queue=33224, util=89.22% 00:09:56.907 10:53:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:09:56.907 10:53:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=66445 00:09:56.907 10:53:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:09:56.907 10:53:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:09:56.907 [global] 00:09:56.907 thread=1 00:09:56.907 invalidate=1 
00:09:56.907 rw=read 00:09:56.907 time_based=1 00:09:56.907 runtime=10 00:09:56.907 ioengine=libaio 00:09:56.907 direct=1 00:09:56.907 bs=4096 00:09:56.907 iodepth=1 00:09:56.907 norandommap=1 00:09:56.907 numjobs=1 00:09:56.907 00:09:56.907 [job0] 00:09:56.907 filename=/dev/nvme0n1 00:09:56.907 [job1] 00:09:56.907 filename=/dev/nvme0n2 00:09:56.907 [job2] 00:09:56.907 filename=/dev/nvme0n3 00:09:56.907 [job3] 00:09:56.907 filename=/dev/nvme0n4 00:09:56.907 Could not set queue depth (nvme0n1) 00:09:56.907 Could not set queue depth (nvme0n2) 00:09:56.907 Could not set queue depth (nvme0n3) 00:09:56.907 Could not set queue depth (nvme0n4) 00:09:56.907 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:56.907 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:56.907 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:56.907 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:56.907 fio-3.35 00:09:56.907 Starting 4 threads 00:10:00.193 10:53:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:00.193 fio: pid=66493, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:00.193 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=37810176, buflen=4096 00:10:00.193 10:53:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:00.452 fio: pid=66492, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:00.452 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=45936640, buflen=4096 00:10:00.452 10:53:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:00.452 10:53:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:00.711 fio: pid=66489, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:00.711 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=46190592, buflen=4096 00:10:00.711 10:53:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:00.711 10:53:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:00.972 fio: pid=66490, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:00.972 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=3620864, buflen=4096 00:10:00.972 00:10:00.972 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66489: Fri Nov 15 10:53:47 2024 00:10:00.972 read: IOPS=3256, BW=12.7MiB/s (13.3MB/s)(44.1MiB/3463msec) 00:10:00.972 slat (usec): min=7, max=11566, avg=18.05, stdev=181.02 00:10:00.972 clat (usec): min=3, max=1799, avg=287.68, stdev=61.25 00:10:00.972 lat (usec): min=142, max=11792, avg=305.73, stdev=189.80 00:10:00.972 clat percentiles (usec): 00:10:00.972 | 1.00th=[ 157], 5.00th=[ 180], 10.00th=[ 206], 20.00th=[ 255], 00:10:00.972 | 30.00th=[ 269], 
40.00th=[ 281], 50.00th=[ 289], 60.00th=[ 302], 00:10:00.972 | 70.00th=[ 314], 80.00th=[ 326], 90.00th=[ 347], 95.00th=[ 363], 00:10:00.972 | 99.00th=[ 408], 99.50th=[ 474], 99.90th=[ 783], 99.95th=[ 1074], 00:10:00.972 | 99.99th=[ 1582] 00:10:00.972 bw ( KiB/s): min=12400, max=12752, per=24.01%, avg=12560.00, stdev=146.90, samples=6 00:10:00.972 iops : min= 3100, max= 3188, avg=3140.00, stdev=36.73, samples=6 00:10:00.972 lat (usec) : 4=0.01%, 250=17.65%, 500=81.94%, 750=0.27%, 1000=0.05% 00:10:00.972 lat (msec) : 2=0.06% 00:10:00.972 cpu : usr=1.04%, sys=4.36%, ctx=11285, majf=0, minf=1 00:10:00.972 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:00.972 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:00.972 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:00.972 issued rwts: total=11278,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:00.972 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:00.972 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66490: Fri Nov 15 10:53:47 2024 00:10:00.972 read: IOPS=4609, BW=18.0MiB/s (18.9MB/s)(67.5MiB/3746msec) 00:10:00.972 slat (usec): min=11, max=15507, avg=18.06, stdev=215.84 00:10:00.972 clat (usec): min=3, max=2629, avg=197.67, stdev=45.57 00:10:00.972 lat (usec): min=128, max=15704, avg=215.74, stdev=220.87 00:10:00.972 clat percentiles (usec): 00:10:00.972 | 1.00th=[ 145], 5.00th=[ 155], 10.00th=[ 163], 20.00th=[ 174], 00:10:00.972 | 30.00th=[ 180], 40.00th=[ 188], 50.00th=[ 194], 60.00th=[ 200], 00:10:00.972 | 70.00th=[ 208], 80.00th=[ 219], 90.00th=[ 237], 95.00th=[ 251], 00:10:00.972 | 99.00th=[ 285], 99.50th=[ 306], 99.90th=[ 445], 99.95th=[ 676], 00:10:00.972 | 99.99th=[ 2442] 00:10:00.972 bw ( KiB/s): min=16832, max=19096, per=35.29%, avg=18459.43, stdev=790.31, samples=7 00:10:00.972 iops : min= 4208, max= 4774, avg=4614.86, stdev=197.58, samples=7 00:10:00.972 lat (usec) : 4=0.01%, 50=0.01%, 250=94.51%, 500=5.39%, 750=0.03% 00:10:00.972 lat (usec) : 1000=0.01% 00:10:00.972 lat (msec) : 2=0.02%, 4=0.01% 00:10:00.972 cpu : usr=1.23%, sys=5.50%, ctx=17293, majf=0, minf=2 00:10:00.972 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:00.972 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:00.972 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:00.972 issued rwts: total=17269,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:00.972 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:00.972 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66492: Fri Nov 15 10:53:47 2024 00:10:00.972 read: IOPS=3495, BW=13.7MiB/s (14.3MB/s)(43.8MiB/3209msec) 00:10:00.972 slat (usec): min=11, max=8259, avg=17.19, stdev=106.72 00:10:00.972 clat (usec): min=195, max=3578, avg=267.39, stdev=58.75 00:10:00.972 lat (usec): min=209, max=8512, avg=284.58, stdev=122.12 00:10:00.972 clat percentiles (usec): 00:10:00.972 | 1.00th=[ 217], 5.00th=[ 229], 10.00th=[ 237], 20.00th=[ 245], 00:10:00.972 | 30.00th=[ 251], 40.00th=[ 258], 50.00th=[ 265], 60.00th=[ 269], 00:10:00.972 | 70.00th=[ 277], 80.00th=[ 285], 90.00th=[ 302], 95.00th=[ 314], 00:10:00.972 | 99.00th=[ 343], 99.50th=[ 355], 99.90th=[ 619], 99.95th=[ 816], 00:10:00.972 | 99.99th=[ 3130] 00:10:00.972 bw ( KiB/s): min=13704, max=14216, per=26.76%, avg=13997.33, stdev=192.71, samples=6 00:10:00.972 iops : min= 
3426, max= 3554, avg=3499.33, stdev=48.18, samples=6 00:10:00.972 lat (usec) : 250=27.50%, 500=72.34%, 750=0.09%, 1000=0.02% 00:10:00.972 lat (msec) : 2=0.01%, 4=0.04% 00:10:00.972 cpu : usr=0.69%, sys=4.89%, ctx=11220, majf=0, minf=1 00:10:00.972 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:00.972 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:00.972 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:00.972 issued rwts: total=11216,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:00.972 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:00.972 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66493: Fri Nov 15 10:53:47 2024 00:10:00.972 read: IOPS=3133, BW=12.2MiB/s (12.8MB/s)(36.1MiB/2946msec) 00:10:00.972 slat (nsec): min=7215, max=50450, avg=11831.04, stdev=4946.66 00:10:00.972 clat (usec): min=215, max=1524, avg=306.27, stdev=38.94 00:10:00.972 lat (usec): min=230, max=1533, avg=318.10, stdev=39.11 00:10:00.972 clat percentiles (usec): 00:10:00.972 | 1.00th=[ 241], 5.00th=[ 255], 10.00th=[ 265], 20.00th=[ 277], 00:10:00.972 | 30.00th=[ 285], 40.00th=[ 293], 50.00th=[ 302], 60.00th=[ 314], 00:10:00.972 | 70.00th=[ 322], 80.00th=[ 334], 90.00th=[ 355], 95.00th=[ 371], 00:10:00.972 | 99.00th=[ 404], 99.50th=[ 420], 99.90th=[ 529], 99.95th=[ 545], 00:10:00.972 | 99.99th=[ 1532] 00:10:00.972 bw ( KiB/s): min=12408, max=12728, per=23.93%, avg=12521.60, stdev=120.85, samples=5 00:10:00.972 iops : min= 3102, max= 3182, avg=3130.40, stdev=30.21, samples=5 00:10:00.972 lat (usec) : 250=3.03%, 500=96.77%, 750=0.17% 00:10:00.972 lat (msec) : 2=0.01% 00:10:00.972 cpu : usr=0.78%, sys=3.43%, ctx=9232, majf=0, minf=2 00:10:00.972 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:00.972 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:00.972 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:00.972 issued rwts: total=9232,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:00.972 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:00.972 00:10:00.972 Run status group 0 (all jobs): 00:10:00.972 READ: bw=51.1MiB/s (53.6MB/s), 12.2MiB/s-18.0MiB/s (12.8MB/s-18.9MB/s), io=191MiB (201MB), run=2946-3746msec 00:10:00.972 00:10:00.972 Disk stats (read/write): 00:10:00.972 nvme0n1: ios=10826/0, merge=0/0, ticks=3092/0, in_queue=3092, util=95.31% 00:10:00.972 nvme0n2: ios=16615/0, merge=0/0, ticks=3348/0, in_queue=3348, util=95.02% 00:10:00.972 nvme0n3: ios=10883/0, merge=0/0, ticks=2960/0, in_queue=2960, util=96.36% 00:10:00.972 nvme0n4: ios=8973/0, merge=0/0, ticks=2596/0, in_queue=2596, util=96.76% 00:10:00.972 10:53:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:00.972 10:53:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:01.231 10:53:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:01.231 10:53:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:01.490 10:53:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs 
$raid_malloc_bdevs $concat_malloc_bdevs 00:10:01.490 10:53:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:01.749 10:53:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:01.749 10:53:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:02.007 10:53:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:02.007 10:53:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:02.264 10:53:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:02.264 10:53:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 66445 00:10:02.264 10:53:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:02.264 10:53:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:02.265 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:02.265 10:53:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:02.265 10:53:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:10:02.265 10:53:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:02.265 10:53:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:02.265 10:53:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:02.265 10:53:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:02.265 nvmf hotplug test: fio failed as expected 00:10:02.265 10:53:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:10:02.265 10:53:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:02.265 10:53:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:02.265 10:53:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:02.522 10:53:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:02.522 10:53:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:02.522 10:53:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:02.522 10:53:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:02.522 10:53:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:02.522 10:53:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:02.522 10:53:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:10:02.522 10:53:49 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:02.522 10:53:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:10:02.522 10:53:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:02.522 10:53:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:02.522 rmmod nvme_tcp 00:10:02.522 rmmod nvme_fabrics 00:10:02.780 rmmod nvme_keyring 00:10:02.780 10:53:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:02.780 10:53:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:10:02.780 10:53:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:10:02.780 10:53:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 66070 ']' 00:10:02.780 10:53:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 66070 00:10:02.780 10:53:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 66070 ']' 00:10:02.780 10:53:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 66070 00:10:02.780 10:53:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:10:02.780 10:53:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:02.780 10:53:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66070 00:10:02.780 killing process with pid 66070 00:10:02.780 10:53:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:02.780 10:53:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:02.780 10:53:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66070' 00:10:02.780 10:53:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 66070 00:10:02.780 10:53:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 66070 00:10:03.038 10:53:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:03.038 10:53:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:03.038 10:53:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:03.038 10:53:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:10:03.039 10:53:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:10:03.039 10:53:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:03.039 10:53:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:10:03.039 10:53:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:03.039 10:53:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:03.039 10:53:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:03.039 10:53:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 
00:10:03.039 10:53:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:03.039 10:53:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:03.039 10:53:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:03.039 10:53:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:03.039 10:53:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:03.039 10:53:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:03.039 10:53:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:03.039 10:53:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:03.039 10:53:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:03.297 10:53:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:03.297 10:53:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:03.297 10:53:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:03.297 10:53:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:03.297 10:53:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:03.297 10:53:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:03.297 10:53:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 00:10:03.297 00:10:03.297 real 0m19.492s 00:10:03.297 user 1m12.690s 00:10:03.297 sys 0m9.317s 00:10:03.297 10:53:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:03.297 10:53:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:03.297 ************************************ 00:10:03.297 END TEST nvmf_fio_target 00:10:03.297 ************************************ 00:10:03.297 10:53:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:03.297 10:53:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:03.297 10:53:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:03.298 10:53:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:03.298 ************************************ 00:10:03.298 START TEST nvmf_bdevio 00:10:03.298 ************************************ 00:10:03.298 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:03.298 * Looking for test storage... 
00:10:03.298 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:03.298 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:03.298 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:10:03.298 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:03.557 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:03.557 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:03.557 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:03.557 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:03.557 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:10:03.557 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:10:03.557 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:10:03.557 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:10:03.557 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:10:03.557 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:10:03.557 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:10:03.557 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:03.557 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:10:03.557 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:10:03.557 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:03.557 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:03.557 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:10:03.557 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:10:03.557 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:03.557 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:10:03.557 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:10:03.557 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:10:03.557 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:10:03.557 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:03.557 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:10:03.557 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:10:03.557 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:03.557 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:03.557 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:10:03.558 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:03.558 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:03.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:03.558 --rc genhtml_branch_coverage=1 00:10:03.558 --rc genhtml_function_coverage=1 00:10:03.558 --rc genhtml_legend=1 00:10:03.558 --rc geninfo_all_blocks=1 00:10:03.558 --rc geninfo_unexecuted_blocks=1 00:10:03.558 00:10:03.558 ' 00:10:03.558 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:03.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:03.558 --rc genhtml_branch_coverage=1 00:10:03.558 --rc genhtml_function_coverage=1 00:10:03.558 --rc genhtml_legend=1 00:10:03.558 --rc geninfo_all_blocks=1 00:10:03.558 --rc geninfo_unexecuted_blocks=1 00:10:03.558 00:10:03.558 ' 00:10:03.558 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:03.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:03.558 --rc genhtml_branch_coverage=1 00:10:03.558 --rc genhtml_function_coverage=1 00:10:03.558 --rc genhtml_legend=1 00:10:03.558 --rc geninfo_all_blocks=1 00:10:03.558 --rc geninfo_unexecuted_blocks=1 00:10:03.558 00:10:03.558 ' 00:10:03.558 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:03.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:03.558 --rc genhtml_branch_coverage=1 00:10:03.558 --rc genhtml_function_coverage=1 00:10:03.558 --rc genhtml_legend=1 00:10:03.558 --rc geninfo_all_blocks=1 00:10:03.558 --rc geninfo_unexecuted_blocks=1 00:10:03.558 00:10:03.558 ' 00:10:03.558 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:03.558 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:03.558 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:10:03.558 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:03.558 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:03.558 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:03.558 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:03.558 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:03.558 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:03.558 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:03.558 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:03.558 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:03.558 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:10:03.558 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:10:03.558 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:03.558 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:03.558 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:03.558 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:03.558 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:03.558 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:10:03.558 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:03.558 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:03.558 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:03.558 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:03.558 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:03.558 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:03.558 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:03.558 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:03.558 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:10:03.558 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:03.558 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:03.558 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:03.558 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:03.558 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:03.558 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:03.558 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:03.558 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:03.558 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:03.558 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:03.558 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:03.558 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:03.558 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 
00:10:03.558 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:03.558 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:03.558 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:03.558 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:03.558 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:03.558 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:03.558 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:03.558 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:03.558 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:03.558 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:03.558 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:03.558 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:03.558 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:03.558 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:03.558 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:03.558 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:03.558 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:03.558 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:03.558 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:03.558 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:03.558 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:03.558 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:03.559 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:03.559 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:03.559 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:03.559 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:03.559 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:03.559 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:03.559 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:03.559 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:03.559 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio 
-- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:03.559 Cannot find device "nvmf_init_br" 00:10:03.559 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:10:03.559 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:03.559 Cannot find device "nvmf_init_br2" 00:10:03.559 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:10:03.559 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:03.559 Cannot find device "nvmf_tgt_br" 00:10:03.559 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # true 00:10:03.559 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:03.559 Cannot find device "nvmf_tgt_br2" 00:10:03.559 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # true 00:10:03.559 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:03.559 Cannot find device "nvmf_init_br" 00:10:03.559 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # true 00:10:03.559 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:03.559 Cannot find device "nvmf_init_br2" 00:10:03.559 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # true 00:10:03.559 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:03.559 Cannot find device "nvmf_tgt_br" 00:10:03.559 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # true 00:10:03.559 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:03.559 Cannot find device "nvmf_tgt_br2" 00:10:03.559 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # true 00:10:03.559 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:03.559 Cannot find device "nvmf_br" 00:10:03.559 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # true 00:10:03.559 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:03.559 Cannot find device "nvmf_init_if" 00:10:03.559 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # true 00:10:03.559 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:03.559 Cannot find device "nvmf_init_if2" 00:10:03.559 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # true 00:10:03.559 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:03.559 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:03.559 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # true 00:10:03.559 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:03.559 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:03.559 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # true 00:10:03.559 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:03.559 
10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:03.559 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:03.819 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:03.819 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:03.819 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:03.819 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:03.819 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:03.819 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:03.819 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:03.819 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:03.819 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:03.819 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:03.819 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:03.819 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:03.819 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:03.819 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:03.819 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:03.819 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:03.819 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:03.819 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:03.819 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:03.819 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:03.819 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:03.819 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:03.819 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:03.819 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:03.819 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 
4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:03.819 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:03.819 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:03.819 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:03.819 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:03.819 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:03.819 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:03.819 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:10:03.819 00:10:03.819 --- 10.0.0.3 ping statistics --- 00:10:03.819 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:03.819 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:10:03.819 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:03.819 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:03.819 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.060 ms 00:10:03.819 00:10:03.819 --- 10.0.0.4 ping statistics --- 00:10:03.819 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:03.819 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:10:03.819 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:03.819 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:03.819 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 00:10:03.819 00:10:03.819 --- 10.0.0.1 ping statistics --- 00:10:03.819 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:03.819 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:10:03.819 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:03.819 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:03.819 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:10:03.819 00:10:03.819 --- 10.0.0.2 ping statistics --- 00:10:03.819 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:03.819 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:10:03.819 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:03.819 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@461 -- # return 0 00:10:03.819 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:03.819 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:03.819 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:03.819 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:03.819 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:03.819 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:03.819 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:03.819 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:03.819 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:03.819 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:03.819 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:03.819 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=66817 00:10:03.819 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 66817 00:10:03.819 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:03.819 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 66817 ']' 00:10:03.819 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:03.819 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:03.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:03.819 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:03.819 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:03.819 10:53:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:04.079 [2024-11-15 10:53:50.729109] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
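At this point nvmf_veth_init has finished wiring up the virtual test network (all four pings succeed) and nvmf_tgt is being started inside the target namespace with core mask 0x78. A condensed sketch of the topology built in the trace above, with commands copied from the trace; the second initiator/target veth pair, the link-up calls, the earlier cleanup attempts and the SPDK_NVMF comment tags on the iptables rules are omitted for brevity:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br              # initiator-side veth pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br                # target-side veth pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                         # move target end into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br                                # bridge the two host-side peer ends
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT      # let NVMe/TCP traffic through
    ping -c 1 10.0.0.3                                                     # connectivity check before starting nvmf_tgt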
00:10:04.079 [2024-11-15 10:53:50.729222] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:04.079 [2024-11-15 10:53:50.883279] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:04.338 [2024-11-15 10:53:50.942300] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:04.338 [2024-11-15 10:53:50.942411] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:04.338 [2024-11-15 10:53:50.942439] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:04.338 [2024-11-15 10:53:50.942451] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:04.338 [2024-11-15 10:53:50.942461] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:04.338 [2024-11-15 10:53:50.944333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:04.338 [2024-11-15 10:53:50.944489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:10:04.338 [2024-11-15 10:53:50.944608] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:10:04.338 [2024-11-15 10:53:50.945117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:04.338 [2024-11-15 10:53:51.004490] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:04.906 10:53:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:04.906 10:53:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:10:04.906 10:53:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:04.906 10:53:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:04.906 10:53:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:04.906 10:53:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:04.906 10:53:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:04.906 10:53:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.906 10:53:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:04.906 [2024-11-15 10:53:51.735158] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:04.906 10:53:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.906 10:53:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:04.906 10:53:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.906 10:53:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:05.165 Malloc0 00:10:05.165 10:53:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.165 10:53:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:10:05.166 10:53:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.166 10:53:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:05.166 10:53:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.166 10:53:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:05.166 10:53:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.166 10:53:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:05.166 10:53:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.166 10:53:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:05.166 10:53:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.166 10:53:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:05.166 [2024-11-15 10:53:51.803196] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:05.166 10:53:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.166 10:53:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:05.166 10:53:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:05.166 10:53:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:10:05.166 10:53:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:10:05.166 10:53:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:05.166 10:53:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:05.166 { 00:10:05.166 "params": { 00:10:05.166 "name": "Nvme$subsystem", 00:10:05.166 "trtype": "$TEST_TRANSPORT", 00:10:05.166 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:05.166 "adrfam": "ipv4", 00:10:05.166 "trsvcid": "$NVMF_PORT", 00:10:05.166 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:05.166 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:05.166 "hdgst": ${hdgst:-false}, 00:10:05.166 "ddgst": ${ddgst:-false} 00:10:05.166 }, 00:10:05.166 "method": "bdev_nvme_attach_controller" 00:10:05.166 } 00:10:05.166 EOF 00:10:05.166 )") 00:10:05.166 10:53:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:10:05.166 10:53:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
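The target side is provisioned through rpc_cmd (TCP transport, 64 MiB Malloc0 bdev, subsystem cnode1 with a namespace and a listener on 10.0.0.3:4420), while bdevio itself is configured with the JSON printed just below, which gen_nvmf_target_json assembles from the heredoc template above and hands over via process substitution, hence the --json /dev/fd/62 in the trace. A condensed sketch of the same target-side provisioning done directly with scripts/rpc.py, assuming the default /var/tmp/spdk.sock RPC socket (all values taken from the trace):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                    # 64 MiB bdev, 512-byte blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420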
00:10:05.166 10:53:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:10:05.166 10:53:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:05.166 "params": { 00:10:05.166 "name": "Nvme1", 00:10:05.166 "trtype": "tcp", 00:10:05.166 "traddr": "10.0.0.3", 00:10:05.166 "adrfam": "ipv4", 00:10:05.166 "trsvcid": "4420", 00:10:05.166 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:05.166 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:05.166 "hdgst": false, 00:10:05.166 "ddgst": false 00:10:05.166 }, 00:10:05.166 "method": "bdev_nvme_attach_controller" 00:10:05.166 }' 00:10:05.166 [2024-11-15 10:53:51.867497] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:10:05.166 [2024-11-15 10:53:51.867615] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66853 ] 00:10:05.166 [2024-11-15 10:53:52.019956] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:05.425 [2024-11-15 10:53:52.088467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:05.425 [2024-11-15 10:53:52.088615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:05.425 [2024-11-15 10:53:52.088628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:05.425 [2024-11-15 10:53:52.175185] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:05.684 I/O targets: 00:10:05.684 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:05.684 00:10:05.684 00:10:05.684 CUnit - A unit testing framework for C - Version 2.1-3 00:10:05.684 http://cunit.sourceforge.net/ 00:10:05.684 00:10:05.684 00:10:05.684 Suite: bdevio tests on: Nvme1n1 00:10:05.684 Test: blockdev write read block ...passed 00:10:05.684 Test: blockdev write zeroes read block ...passed 00:10:05.684 Test: blockdev write zeroes read no split ...passed 00:10:05.684 Test: blockdev write zeroes read split ...passed 00:10:05.684 Test: blockdev write zeroes read split partial ...passed 00:10:05.684 Test: blockdev reset ...[2024-11-15 10:53:52.339302] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:10:05.684 [2024-11-15 10:53:52.339453] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x501180 (9): Bad file descriptor 00:10:05.684 passed 00:10:05.684 Test: blockdev write read 8 blocks ...[2024-11-15 10:53:52.353653] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:10:05.684 passed 00:10:05.684 Test: blockdev write read size > 128k ...passed 00:10:05.684 Test: blockdev write read invalid size ...passed 00:10:05.684 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:05.684 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:05.684 Test: blockdev write read max offset ...passed 00:10:05.684 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:05.684 Test: blockdev writev readv 8 blocks ...passed 00:10:05.684 Test: blockdev writev readv 30 x 1block ...passed 00:10:05.684 Test: blockdev writev readv block ...passed 00:10:05.684 Test: blockdev writev readv size > 128k ...passed 00:10:05.684 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:05.684 Test: blockdev comparev and writev ...[2024-11-15 10:53:52.364832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:05.684 [2024-11-15 10:53:52.364880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:05.684 [2024-11-15 10:53:52.364901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:05.684 [2024-11-15 10:53:52.364912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:05.684 passed 00:10:05.684 Test: blockdev nvme passthru rw ...passed 00:10:05.684 Test: blockdev nvme passthru vendor specific ...passed 00:10:05.684 Test: blockdev nvme admin passthru ...[2024-11-15 10:53:52.365227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:05.684 [2024-11-15 10:53:52.365250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:05.684 [2024-11-15 10:53:52.365268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:05.684 [2024-11-15 10:53:52.365278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:05.684 [2024-11-15 10:53:52.365598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:05.685 [2024-11-15 10:53:52.365615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:05.685 [2024-11-15 10:53:52.365632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:05.685 [2024-11-15 10:53:52.365643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:05.685 [2024-11-15 10:53:52.365927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:05.685 [2024-11-15 10:53:52.365943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:05.685 [2024-11-15 10:53:52.365960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x200 00:10:05.685 [2024-11-15 10:53:52.365970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:05.685 [2024-11-15 10:53:52.366761] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:05.685 [2024-11-15 10:53:52.366781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:05.685 [2024-11-15 10:53:52.366899] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:05.685 [2024-11-15 10:53:52.366915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:05.685 [2024-11-15 10:53:52.367019] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:05.685 [2024-11-15 10:53:52.367035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:05.685 [2024-11-15 10:53:52.367145] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:05.685 [2024-11-15 10:53:52.367161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:05.685 passed 00:10:05.685 Test: blockdev copy ...passed 00:10:05.685 00:10:05.685 Run Summary: Type Total Ran Passed Failed Inactive 00:10:05.685 suites 1 1 n/a 0 0 00:10:05.685 tests 23 23 23 0 0 00:10:05.685 asserts 152 152 152 0 n/a 00:10:05.685 00:10:05.685 Elapsed time = 0.150 seconds 00:10:05.959 10:53:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:05.959 10:53:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.959 10:53:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:05.959 10:53:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.959 10:53:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:05.959 10:53:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:05.959 10:53:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:05.959 10:53:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:10:05.959 10:53:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:05.959 10:53:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:10:05.959 10:53:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:05.959 10:53:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:05.959 rmmod nvme_tcp 00:10:05.959 rmmod nvme_fabrics 00:10:05.959 rmmod nvme_keyring 00:10:05.959 10:53:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:05.959 10:53:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:10:05.959 10:53:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:10:05.959 10:53:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@517 -- # '[' -n 66817 ']' 00:10:05.959 10:53:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 66817 00:10:05.959 10:53:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 66817 ']' 00:10:05.959 10:53:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 66817 00:10:05.959 10:53:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:10:05.959 10:53:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:05.959 10:53:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66817 00:10:05.959 killing process with pid 66817 00:10:05.959 10:53:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:10:05.959 10:53:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:10:05.959 10:53:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66817' 00:10:05.959 10:53:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 66817 00:10:05.959 10:53:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 66817 00:10:06.539 10:53:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:06.539 10:53:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:06.539 10:53:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:06.539 10:53:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:10:06.539 10:53:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:10:06.539 10:53:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:06.539 10:53:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:10:06.539 10:53:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:06.539 10:53:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:06.539 10:53:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:06.539 10:53:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:06.539 10:53:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:06.539 10:53:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:06.539 10:53:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:06.539 10:53:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:06.539 10:53:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:06.539 10:53:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:06.539 10:53:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:06.539 10:53:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:06.539 10:53:53 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:06.539 10:53:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:06.539 10:53:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:06.539 10:53:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:06.539 10:53:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:06.539 10:53:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:06.539 10:53:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:06.539 10:53:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 00:10:06.539 00:10:06.539 real 0m3.307s 00:10:06.539 user 0m10.106s 00:10:06.539 sys 0m0.955s 00:10:06.539 ************************************ 00:10:06.539 END TEST nvmf_bdevio 00:10:06.539 ************************************ 00:10:06.539 10:53:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:06.539 10:53:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:06.800 10:53:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:06.800 00:10:06.800 real 2m36.350s 00:10:06.800 user 6m51.001s 00:10:06.800 sys 0m53.157s 00:10:06.800 10:53:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:06.800 10:53:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:06.800 ************************************ 00:10:06.800 END TEST nvmf_target_core 00:10:06.800 ************************************ 00:10:06.800 10:53:53 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:06.800 10:53:53 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:06.800 10:53:53 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:06.800 10:53:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:06.800 ************************************ 00:10:06.800 START TEST nvmf_target_extra 00:10:06.800 ************************************ 00:10:06.800 10:53:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:06.800 * Looking for test storage... 
00:10:06.800 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:10:06.800 10:53:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:06.800 10:53:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:06.800 10:53:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:10:06.800 10:53:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:06.800 10:53:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:06.800 10:53:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:06.800 10:53:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:06.800 10:53:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:10:06.800 10:53:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:10:06.800 10:53:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:10:06.800 10:53:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:10:06.800 10:53:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:10:06.800 10:53:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:10:06.800 10:53:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:10:06.800 10:53:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:06.800 10:53:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:10:06.800 10:53:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:10:06.800 10:53:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:06.800 10:53:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:06.800 10:53:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:10:06.800 10:53:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:10:06.800 10:53:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:06.800 10:53:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:10:06.800 10:53:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:10:06.800 10:53:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:10:06.800 10:53:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:10:06.800 10:53:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:06.800 10:53:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:10:06.800 10:53:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:10:06.800 10:53:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:06.800 10:53:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:06.800 10:53:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:10:06.800 10:53:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:06.800 10:53:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:06.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.800 --rc genhtml_branch_coverage=1 00:10:06.800 --rc genhtml_function_coverage=1 00:10:06.800 --rc genhtml_legend=1 00:10:06.800 --rc geninfo_all_blocks=1 00:10:06.800 --rc geninfo_unexecuted_blocks=1 00:10:06.800 00:10:06.800 ' 00:10:06.800 10:53:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:06.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.800 --rc genhtml_branch_coverage=1 00:10:06.800 --rc genhtml_function_coverage=1 00:10:06.800 --rc genhtml_legend=1 00:10:06.800 --rc geninfo_all_blocks=1 00:10:06.800 --rc geninfo_unexecuted_blocks=1 00:10:06.800 00:10:06.800 ' 00:10:06.800 10:53:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:06.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.800 --rc genhtml_branch_coverage=1 00:10:06.800 --rc genhtml_function_coverage=1 00:10:06.800 --rc genhtml_legend=1 00:10:06.800 --rc geninfo_all_blocks=1 00:10:06.800 --rc geninfo_unexecuted_blocks=1 00:10:06.800 00:10:06.800 ' 00:10:06.800 10:53:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:06.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.800 --rc genhtml_branch_coverage=1 00:10:06.800 --rc genhtml_function_coverage=1 00:10:06.800 --rc genhtml_legend=1 00:10:06.800 --rc geninfo_all_blocks=1 00:10:06.800 --rc geninfo_unexecuted_blocks=1 00:10:06.800 00:10:06.800 ' 00:10:06.800 10:53:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:06.800 10:53:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:06.800 10:53:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:06.800 10:53:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:06.800 10:53:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:06.800 10:53:53 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:06.800 10:53:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:06.800 10:53:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:06.800 10:53:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:06.800 10:53:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:06.800 10:53:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:06.800 10:53:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:06.800 10:53:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:10:06.800 10:53:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:10:06.800 10:53:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:06.800 10:53:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:06.800 10:53:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:06.800 10:53:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:06.800 10:53:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:06.800 10:53:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:10:06.800 10:53:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:06.800 10:53:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:06.800 10:53:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:06.800 10:53:53 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.800 10:53:53 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.800 10:53:53 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.800 10:53:53 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:06.800 10:53:53 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.061 10:53:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:10:07.061 10:53:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:07.061 10:53:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:07.061 10:53:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:07.061 10:53:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:07.061 10:53:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:07.061 10:53:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:07.061 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:07.061 10:53:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:07.061 10:53:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:07.061 10:53:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:07.061 10:53:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:07.061 10:53:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:07.061 10:53:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 1 -eq 0 ]] 00:10:07.061 10:53:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:10:07.061 10:53:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:07.061 10:53:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:07.061 10:53:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:07.061 ************************************ 00:10:07.061 START TEST nvmf_auth_target 00:10:07.061 ************************************ 00:10:07.061 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:10:07.061 * Looking for test storage... 
00:10:07.061 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:07.061 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:07.061 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:10:07.061 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:07.061 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:07.061 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:07.061 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:07.061 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:07.061 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:10:07.061 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:10:07.061 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:10:07.061 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:10:07.061 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:10:07.061 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:10:07.061 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:10:07.061 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:07.061 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:10:07.061 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:10:07.061 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:07.061 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:07.061 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:10:07.061 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:10:07.061 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:07.061 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:10:07.061 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:10:07.061 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:10:07.061 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:10:07.061 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:07.061 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:10:07.061 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:10:07.061 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:07.061 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:07.061 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:10:07.061 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:07.061 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:07.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.061 --rc genhtml_branch_coverage=1 00:10:07.061 --rc genhtml_function_coverage=1 00:10:07.061 --rc genhtml_legend=1 00:10:07.061 --rc geninfo_all_blocks=1 00:10:07.061 --rc geninfo_unexecuted_blocks=1 00:10:07.061 00:10:07.061 ' 00:10:07.061 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:07.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.061 --rc genhtml_branch_coverage=1 00:10:07.061 --rc genhtml_function_coverage=1 00:10:07.061 --rc genhtml_legend=1 00:10:07.061 --rc geninfo_all_blocks=1 00:10:07.061 --rc geninfo_unexecuted_blocks=1 00:10:07.061 00:10:07.061 ' 00:10:07.061 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:07.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.061 --rc genhtml_branch_coverage=1 00:10:07.061 --rc genhtml_function_coverage=1 00:10:07.061 --rc genhtml_legend=1 00:10:07.061 --rc geninfo_all_blocks=1 00:10:07.061 --rc geninfo_unexecuted_blocks=1 00:10:07.061 00:10:07.061 ' 00:10:07.061 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:07.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.061 --rc genhtml_branch_coverage=1 00:10:07.061 --rc genhtml_function_coverage=1 00:10:07.061 --rc genhtml_legend=1 00:10:07.061 --rc geninfo_all_blocks=1 00:10:07.061 --rc geninfo_unexecuted_blocks=1 00:10:07.061 00:10:07.061 ' 00:10:07.061 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:07.061 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@7 -- # uname -s 00:10:07.062 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:07.062 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:07.062 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:07.062 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:07.062 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:07.062 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:07.062 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:07.062 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:07.062 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:07.062 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:07.062 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:10:07.062 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:10:07.062 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:07.062 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:07.062 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:07.062 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:07.062 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:07.062 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:10:07.062 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:07.062 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:07.062 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:07.062 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.062 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.062 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.062 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:10:07.062 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.062 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:10:07.062 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:07.062 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:07.062 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:07.062 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:07.062 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:07.062 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:07.062 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:07.062 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:07.062 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:07.062 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:07.062 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:10:07.062 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" 
"ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:10:07.062 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:10:07.062 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:10:07.062 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:10:07.062 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:10:07.062 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:10:07.062 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:10:07.062 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:07.062 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:07.062 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:07.062 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:07.062 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:07.062 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:07.062 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:07.062 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:07.322 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:07.322 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:07.322 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:07.322 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:07.322 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:07.322 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:07.322 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:07.322 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:07.322 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:07.322 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:07.322 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:07.322 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:07.322 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:07.322 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:07.322 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:07.322 
10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:07.322 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:07.322 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:07.322 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:07.322 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:07.322 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:07.322 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:07.322 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:07.322 Cannot find device "nvmf_init_br" 00:10:07.322 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:10:07.322 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:07.322 Cannot find device "nvmf_init_br2" 00:10:07.322 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:10:07.322 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:07.322 Cannot find device "nvmf_tgt_br" 00:10:07.322 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # true 00:10:07.322 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:07.322 Cannot find device "nvmf_tgt_br2" 00:10:07.322 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # true 00:10:07.322 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:07.322 Cannot find device "nvmf_init_br" 00:10:07.322 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # true 00:10:07.322 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:07.322 Cannot find device "nvmf_init_br2" 00:10:07.322 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # true 00:10:07.322 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:07.322 Cannot find device "nvmf_tgt_br" 00:10:07.322 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # true 00:10:07.322 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:07.322 Cannot find device "nvmf_tgt_br2" 00:10:07.322 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # true 00:10:07.322 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:07.322 Cannot find device "nvmf_br" 00:10:07.322 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # true 00:10:07.322 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:07.322 Cannot find device "nvmf_init_if" 00:10:07.322 10:53:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # true 00:10:07.322 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:07.322 Cannot find device "nvmf_init_if2" 00:10:07.322 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # true 00:10:07.322 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:07.322 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:07.322 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # true 00:10:07.322 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:07.322 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:07.322 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # true 00:10:07.322 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:07.322 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:07.322 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:07.322 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:07.322 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:07.322 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:07.322 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:07.322 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:07.322 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:07.582 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:07.582 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:07.582 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:07.582 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:07.582 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:07.582 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:07.582 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:07.582 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:07.582 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:07.582 10:53:54 
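The "Cannot find device" and "Cannot open network namespace" messages above are expected on a fresh node: before building anything, nvmf_veth_init tears down leftovers from a previous run, and each cleanup command appears to be allowed to fail (the paired "# true" trace entries at the same script line). A rough sketch of that tolerant-cleanup idiom, using the interface names from the trace:

# Sketch only: remove leftover test plumbing, ignoring "does not exist" errors.
ip link set nvmf_init_br nomaster || true
ip link delete nvmf_br type bridge || true
ip link delete nvmf_init_if || true
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if || true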
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:07.582 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:07.582 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:07.582 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:07.582 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:07.582 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:07.582 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:07.582 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:07.582 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:07.582 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:07.582 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:07.582 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:07.582 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:07.582 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:07.582 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:07.582 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:07.582 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:10:07.582 00:10:07.582 --- 10.0.0.3 ping statistics --- 00:10:07.582 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:07.582 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:10:07.582 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:07.582 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:07.582 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.101 ms 00:10:07.582 00:10:07.582 --- 10.0.0.4 ping statistics --- 00:10:07.582 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:07.582 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:10:07.582 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:07.582 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:07.582 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:10:07.582 00:10:07.582 --- 10.0.0.1 ping statistics --- 00:10:07.582 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:07.582 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:10:07.582 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:07.582 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:07.582 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:10:07.582 00:10:07.582 --- 10.0.0.2 ping statistics --- 00:10:07.582 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:07.582 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:10:07.582 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:07.582 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@461 -- # return 0 00:10:07.582 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:07.582 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:07.582 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:07.582 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:07.582 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:07.582 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:07.582 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:07.582 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:10:07.582 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:07.582 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:07.582 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:07.582 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=67142 00:10:07.582 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 67142 00:10:07.582 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:10:07.582 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 67142 ']' 00:10:07.582 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:07.582 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:07.582 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
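Condensed from the ip/iptables commands traced above, the connectivity that the four pings just verified looks roughly like this (only one initiator pair and one target pair are shown; the script builds two of each):

# Sketch of the veth/namespace/bridge topology used by the test.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side stays in the root namespace
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side is moved into the namespace
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up; ip link set nvmf_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # allow the NVMe/TCP port in
ping -c 1 10.0.0.3   # root namespace -> target address inside the namespace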
00:10:07.582 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:07.582 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:08.151 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:08.151 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:10:08.151 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:08.151 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:08.151 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:08.151 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:08.151 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=67166 00:10:08.151 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:10:08.151 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:10:08.151 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:10:08.151 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:10:08.151 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:08.151 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:10:08.151 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:10:08.151 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:10:08.151 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:10:08.151 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=84408f220e8aaafc3f50862b9a13250e6e6ae53f51c28108 00:10:08.151 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:10:08.151 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.DZc 00:10:08.151 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 84408f220e8aaafc3f50862b9a13250e6e6ae53f51c28108 0 00:10:08.151 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 84408f220e8aaafc3f50862b9a13250e6e6ae53f51c28108 0 00:10:08.151 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:10:08.151 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:10:08.151 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=84408f220e8aaafc3f50862b9a13250e6e6ae53f51c28108 00:10:08.151 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:10:08.151 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:10:08.151 10:53:54 
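The gen_dhchap_key calls being traced here ("xxd -p -c0 -l 24 /dev/urandom", then the "python -" here-document in format_key) turn raw random bytes into the DHHC-1 secret strings that nvme connect uses later in the run. A rough sketch of that step, with the assumption (per the NVMe DH-HMAC-CHAP secret representation) that the base64 payload is the ASCII key followed by its little-endian CRC-32; the key produced here is illustrative only:

# Sketch: generate a 48-character hex secret and wrap it as a DHHC-1 string
# ("00" corresponds to the null/no-hash case used for key0 in this run).
key=$(xxd -p -c0 -l 24 /dev/urandom)   # 24 random bytes -> 48 hex characters
python3 - "$key" <<'EOF'
import base64, sys, zlib
key = sys.argv[1].encode()
crc = zlib.crc32(key).to_bytes(4, "little")   # assumption: 4-byte little-endian CRC-32 appended to the secret
print("DHHC-1:00:" + base64.b64encode(key + crc).decode() + ":")
EOF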
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.DZc 00:10:08.151 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.DZc 00:10:08.151 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.DZc 00:10:08.151 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:10:08.151 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:10:08.151 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:08.151 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:10:08.151 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:10:08.151 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:10:08.151 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:10:08.151 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=4d52e76f4af10f98cf02985210570cc32e97cb65e12be253f836b1a520859d4a 00:10:08.151 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:10:08.151 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.I3Q 00:10:08.151 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 4d52e76f4af10f98cf02985210570cc32e97cb65e12be253f836b1a520859d4a 3 00:10:08.151 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 4d52e76f4af10f98cf02985210570cc32e97cb65e12be253f836b1a520859d4a 3 00:10:08.151 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:10:08.151 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:10:08.151 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=4d52e76f4af10f98cf02985210570cc32e97cb65e12be253f836b1a520859d4a 00:10:08.151 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:10:08.151 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:10:08.412 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.I3Q 00:10:08.412 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.I3Q 00:10:08.412 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.I3Q 00:10:08.412 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:10:08.412 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:10:08.412 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:08.412 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:10:08.412 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:10:08.412 10:53:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:10:08.412 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:10:08.412 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=637a31d1bd8b83df67530b38384c495d 00:10:08.412 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:10:08.412 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.ovN 00:10:08.412 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 637a31d1bd8b83df67530b38384c495d 1 00:10:08.412 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 637a31d1bd8b83df67530b38384c495d 1 00:10:08.412 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:10:08.412 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:10:08.412 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=637a31d1bd8b83df67530b38384c495d 00:10:08.412 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:10:08.412 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:10:08.412 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.ovN 00:10:08.412 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.ovN 00:10:08.412 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.ovN 00:10:08.412 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:10:08.412 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:10:08.412 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:08.412 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:10:08.412 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:10:08.412 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:10:08.412 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:10:08.412 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=45aa5af9fcf4d7e7676c43cac8c1f01b2b3d090641abae1c 00:10:08.412 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:10:08.412 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.ENN 00:10:08.412 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 45aa5af9fcf4d7e7676c43cac8c1f01b2b3d090641abae1c 2 00:10:08.412 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 45aa5af9fcf4d7e7676c43cac8c1f01b2b3d090641abae1c 2 00:10:08.412 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:10:08.412 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # prefix=DHHC-1 00:10:08.412 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=45aa5af9fcf4d7e7676c43cac8c1f01b2b3d090641abae1c 00:10:08.412 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:10:08.412 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:10:08.412 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.ENN 00:10:08.412 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.ENN 00:10:08.412 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.ENN 00:10:08.412 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:10:08.413 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:10:08.413 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:08.413 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:10:08.413 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:10:08.413 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:10:08.413 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:10:08.413 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=ef63e8c8b5a24f28b61cbea63abf09de0171c9a5f910c3cd 00:10:08.413 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:10:08.413 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.xdp 00:10:08.413 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key ef63e8c8b5a24f28b61cbea63abf09de0171c9a5f910c3cd 2 00:10:08.413 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 ef63e8c8b5a24f28b61cbea63abf09de0171c9a5f910c3cd 2 00:10:08.413 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:10:08.413 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:10:08.413 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=ef63e8c8b5a24f28b61cbea63abf09de0171c9a5f910c3cd 00:10:08.413 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:10:08.413 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:10:08.413 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.xdp 00:10:08.413 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.xdp 00:10:08.413 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.xdp 00:10:08.413 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:10:08.413 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:10:08.413 10:53:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:08.413 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:10:08.413 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:10:08.413 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:10:08.413 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:10:08.413 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=d28b5400ad66d623bb9ee47a916a1afb 00:10:08.413 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:10:08.413 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Q6x 00:10:08.413 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key d28b5400ad66d623bb9ee47a916a1afb 1 00:10:08.413 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 d28b5400ad66d623bb9ee47a916a1afb 1 00:10:08.413 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:10:08.413 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:10:08.413 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=d28b5400ad66d623bb9ee47a916a1afb 00:10:08.413 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:10:08.413 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:10:08.673 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Q6x 00:10:08.673 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Q6x 00:10:08.673 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.Q6x 00:10:08.673 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:10:08.673 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:10:08.673 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:08.673 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:10:08.673 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:10:08.673 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:10:08.673 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:10:08.673 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=749831bdcc4f93093906ba2d9a980f9b8667faa67919b5bf784f48ec89bf7f91 00:10:08.673 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:10:08.673 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.bO6 00:10:08.673 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 
749831bdcc4f93093906ba2d9a980f9b8667faa67919b5bf784f48ec89bf7f91 3 00:10:08.673 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 749831bdcc4f93093906ba2d9a980f9b8667faa67919b5bf784f48ec89bf7f91 3 00:10:08.673 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:10:08.673 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:10:08.673 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=749831bdcc4f93093906ba2d9a980f9b8667faa67919b5bf784f48ec89bf7f91 00:10:08.673 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:10:08.673 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:10:08.673 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.bO6 00:10:08.673 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.bO6 00:10:08.673 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.bO6 00:10:08.673 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:10:08.673 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 67142 00:10:08.673 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 67142 ']' 00:10:08.673 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:08.673 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:08.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:08.673 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:08.673 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:08.673 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:08.933 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:08.933 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:10:08.933 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 67166 /var/tmp/host.sock 00:10:08.933 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 67166 ']' 00:10:08.933 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:10:08.933 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:08.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:10:08.933 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
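At this point the trace has produced all of the key material the rest of the run exercises; summarizing the files generated above:

# keys[0] = /tmp/spdk.key-null.DZc    (48-char secret)    ckeys[0] = /tmp/spdk.key-sha512.I3Q (64-char)
# keys[1] = /tmp/spdk.key-sha256.ovN  (32-char)           ckeys[1] = /tmp/spdk.key-sha384.ENN (48-char)
# keys[2] = /tmp/spdk.key-sha384.xdp  (48-char)           ckeys[2] = /tmp/spdk.key-sha256.Q6x (32-char)
# keys[3] = /tmp/spdk.key-sha512.bO6  (64-char)           ckeys[3] = (none; the last key is exercised without a controller key)

Each keys[i]/ckeys[i] pair is registered with both the target and the host below and then used as --dhchap-key/--dhchap-ctrlr-key for one round of connect attempts.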
00:10:08.933 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:08.933 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:09.192 10:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:09.192 10:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:10:09.192 10:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:10:09.192 10:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.192 10:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:09.192 10:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.192 10:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:10:09.192 10:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.DZc 00:10:09.192 10:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.192 10:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:09.192 10:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.192 10:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.DZc 00:10:09.192 10:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.DZc 00:10:09.452 10:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.I3Q ]] 00:10:09.452 10:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.I3Q 00:10:09.452 10:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.452 10:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:09.711 10:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.711 10:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.I3Q 00:10:09.711 10:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.I3Q 00:10:09.711 10:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:10:09.711 10:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.ovN 00:10:09.711 10:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.712 10:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:09.712 10:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.712 10:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.ovN 00:10:09.712 10:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.ovN 00:10:09.970 10:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.ENN ]] 00:10:09.970 10:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.ENN 00:10:09.970 10:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.971 10:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:09.971 10:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.971 10:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.ENN 00:10:09.971 10:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.ENN 00:10:10.539 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:10:10.539 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.xdp 00:10:10.539 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.539 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:10.539 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.539 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.xdp 00:10:10.539 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.xdp 00:10:10.798 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.Q6x ]] 00:10:10.798 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Q6x 00:10:10.798 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.798 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:10.798 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.798 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Q6x 00:10:10.798 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Q6x 00:10:11.057 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:10:11.057 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.bO6 00:10:11.057 10:53:57 
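The keyring_file_add_key calls traced around this point register every generated key twice: once with the nvmf target over its default RPC socket (rpc_cmd) and once with the spdk_tgt instance acting as the host over /var/tmp/host.sock (hostrpc). A condensed sketch of that loop, following the target/auth.sh pattern visible in the trace (rpc.py path abbreviated):

# Sketch: expose each key file to both sides under the names key$i / ckey$i.
for i in "${!keys[@]}"; do
    scripts/rpc.py keyring_file_add_key "key$i" "${keys[i]}"                          # target side
    scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key "key$i" "${keys[i]}"    # host side
    if [[ -n ${ckeys[i]} ]]; then
        scripts/rpc.py keyring_file_add_key "ckey$i" "${ckeys[i]}"
        scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key "ckey$i" "${ckeys[i]}"
    fi
done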
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.057 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:11.057 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.057 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.bO6 00:10:11.057 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.bO6 00:10:11.315 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:10:11.315 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:10:11.315 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:11.315 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:11.315 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:11.316 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:11.574 10:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:10:11.574 10:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:11.574 10:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:11.574 10:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:11.574 10:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:11.574 10:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:11.574 10:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:11.574 10:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.574 10:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:11.574 10:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.574 10:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:11.574 10:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:11.574 10:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:11.832 00:10:11.832 10:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:11.832 10:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:11.832 10:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:12.091 10:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:12.091 10:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:12.091 10:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.091 10:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:12.091 10:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.091 10:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:12.091 { 00:10:12.091 "cntlid": 1, 00:10:12.091 "qid": 0, 00:10:12.091 "state": "enabled", 00:10:12.091 "thread": "nvmf_tgt_poll_group_000", 00:10:12.091 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca", 00:10:12.091 "listen_address": { 00:10:12.091 "trtype": "TCP", 00:10:12.091 "adrfam": "IPv4", 00:10:12.091 "traddr": "10.0.0.3", 00:10:12.091 "trsvcid": "4420" 00:10:12.091 }, 00:10:12.091 "peer_address": { 00:10:12.091 "trtype": "TCP", 00:10:12.091 "adrfam": "IPv4", 00:10:12.091 "traddr": "10.0.0.1", 00:10:12.091 "trsvcid": "34116" 00:10:12.091 }, 00:10:12.091 "auth": { 00:10:12.091 "state": "completed", 00:10:12.091 "digest": "sha256", 00:10:12.091 "dhgroup": "null" 00:10:12.091 } 00:10:12.091 } 00:10:12.091 ]' 00:10:12.091 10:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:12.091 10:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:12.091 10:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:12.350 10:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:12.350 10:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:12.350 10:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:12.350 10:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:12.350 10:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:12.610 10:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODQ0MDhmMjIwZThhYWFmYzNmNTA4NjJiOWExMzI1MGU2ZTZhZTUzZjUxYzI4MTA4LDyCxg==: --dhchap-ctrl-secret DHHC-1:03:NGQ1MmU3NmY0YWYxMGY5OGNmMDI5ODUyMTA1NzBjYzMyZTk3Y2I2NWUxMmJlMjUzZjgzNmIxYTUyMDg1OWQ0YarPIsE=: 00:10:12.610 10:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --hostid 02f14d39-9b07-4abc-bc4a-e88d43a336ca -l 0 --dhchap-secret DHHC-1:00:ODQ0MDhmMjIwZThhYWFmYzNmNTA4NjJiOWExMzI1MGU2ZTZhZTUzZjUxYzI4MTA4LDyCxg==: --dhchap-ctrl-secret DHHC-1:03:NGQ1MmU3NmY0YWYxMGY5OGNmMDI5ODUyMTA1NzBjYzMyZTk3Y2I2NWUxMmJlMjUzZjgzNmIxYTUyMDg1OWQ0YarPIsE=: 00:10:16.803 10:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:16.803 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:16.803 10:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:10:16.803 10:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.803 10:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:16.803 10:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.803 10:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:16.803 10:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:16.803 10:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:16.803 10:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:10:16.803 10:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:16.803 10:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:16.803 10:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:16.803 10:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:16.803 10:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:16.803 10:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:16.803 10:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.803 10:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:16.803 10:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.803 10:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:16.803 10:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:16.803 10:54:03 
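Each digest/dhgroup/key combination repeats the sequence traced for key0 above and now starting again for key1: restrict the host to one digest and DH group, register the host NQN on the subsystem with the matching keyring entries, attach a controller through the host socket, and confirm the resulting qpair reports completed authentication. A condensed sketch (N, hostnqn, and the rpc.py path stand in for the literal values in the trace):

# Sketch of one connect_authenticate iteration (digest=sha256, dhgroup=null, key index N).
rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
    --dhchap-key "key$N" --dhchap-ctrlr-key "ckey$N"
rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
    -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key "key$N" --dhchap-ctrlr-key "ckey$N"
rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'   # expect "completed"
rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0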
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:17.062 00:10:17.062 10:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:17.062 10:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:17.062 10:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:17.322 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:17.322 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:17.322 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.322 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:17.322 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.322 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:17.322 { 00:10:17.322 "cntlid": 3, 00:10:17.322 "qid": 0, 00:10:17.322 "state": "enabled", 00:10:17.322 "thread": "nvmf_tgt_poll_group_000", 00:10:17.322 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca", 00:10:17.322 "listen_address": { 00:10:17.322 "trtype": "TCP", 00:10:17.322 "adrfam": "IPv4", 00:10:17.322 "traddr": "10.0.0.3", 00:10:17.322 "trsvcid": "4420" 00:10:17.322 }, 00:10:17.322 "peer_address": { 00:10:17.322 "trtype": "TCP", 00:10:17.322 "adrfam": "IPv4", 00:10:17.322 "traddr": "10.0.0.1", 00:10:17.322 "trsvcid": "34144" 00:10:17.322 }, 00:10:17.322 "auth": { 00:10:17.322 "state": "completed", 00:10:17.322 "digest": "sha256", 00:10:17.322 "dhgroup": "null" 00:10:17.322 } 00:10:17.322 } 00:10:17.322 ]' 00:10:17.322 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:17.581 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:17.581 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:17.581 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:17.581 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:17.581 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:17.581 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:17.581 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:17.840 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjM3YTMxZDFiZDhiODNkZjY3NTMwYjM4Mzg0YzQ5NWQJ3Ioy: --dhchap-ctrl-secret 
DHHC-1:02:NDVhYTVhZjlmY2Y0ZDdlNzY3NmM0M2NhYzhjMWYwMWIyYjNkMDkwNjQxYWJhZTFjDYJZFw==: 00:10:17.840 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --hostid 02f14d39-9b07-4abc-bc4a-e88d43a336ca -l 0 --dhchap-secret DHHC-1:01:NjM3YTMxZDFiZDhiODNkZjY3NTMwYjM4Mzg0YzQ5NWQJ3Ioy: --dhchap-ctrl-secret DHHC-1:02:NDVhYTVhZjlmY2Y0ZDdlNzY3NmM0M2NhYzhjMWYwMWIyYjNkMDkwNjQxYWJhZTFjDYJZFw==: 00:10:18.406 10:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:18.406 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:18.406 10:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:10:18.406 10:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.406 10:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:18.406 10:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.406 10:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:18.406 10:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:18.406 10:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:18.665 10:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:10:18.665 10:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:18.665 10:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:18.665 10:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:18.665 10:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:18.665 10:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:18.665 10:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:18.665 10:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.666 10:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:18.666 10:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.666 10:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:18.666 10:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:18.666 10:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:19.234 00:10:19.234 10:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:19.234 10:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:19.234 10:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:19.234 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:19.234 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:19.234 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.234 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:19.494 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.494 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:19.494 { 00:10:19.494 "cntlid": 5, 00:10:19.494 "qid": 0, 00:10:19.494 "state": "enabled", 00:10:19.494 "thread": "nvmf_tgt_poll_group_000", 00:10:19.494 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca", 00:10:19.494 "listen_address": { 00:10:19.494 "trtype": "TCP", 00:10:19.494 "adrfam": "IPv4", 00:10:19.494 "traddr": "10.0.0.3", 00:10:19.494 "trsvcid": "4420" 00:10:19.494 }, 00:10:19.494 "peer_address": { 00:10:19.494 "trtype": "TCP", 00:10:19.494 "adrfam": "IPv4", 00:10:19.494 "traddr": "10.0.0.1", 00:10:19.494 "trsvcid": "34174" 00:10:19.494 }, 00:10:19.494 "auth": { 00:10:19.494 "state": "completed", 00:10:19.494 "digest": "sha256", 00:10:19.494 "dhgroup": "null" 00:10:19.494 } 00:10:19.494 } 00:10:19.494 ]' 00:10:19.494 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:19.494 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:19.494 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:19.494 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:19.494 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:19.494 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:19.494 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:19.494 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:19.754 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:ZWY2M2U4YzhiNWEyNGYyOGI2MWNiZWE2M2FiZjA5ZGUwMTcxYzlhNWY5MTBjM2NkviDV6Q==: --dhchap-ctrl-secret DHHC-1:01:ZDI4YjU0MDBhZDY2ZDYyM2JiOWVlNDdhOTE2YTFhZmKfhOZS: 00:10:19.754 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --hostid 02f14d39-9b07-4abc-bc4a-e88d43a336ca -l 0 --dhchap-secret DHHC-1:02:ZWY2M2U4YzhiNWEyNGYyOGI2MWNiZWE2M2FiZjA5ZGUwMTcxYzlhNWY5MTBjM2NkviDV6Q==: --dhchap-ctrl-secret DHHC-1:01:ZDI4YjU0MDBhZDY2ZDYyM2JiOWVlNDdhOTE2YTFhZmKfhOZS: 00:10:20.321 10:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:20.321 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:20.321 10:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:10:20.321 10:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.321 10:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:20.321 10:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.321 10:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:20.321 10:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:20.321 10:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:20.582 10:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:10:20.582 10:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:20.582 10:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:20.582 10:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:20.582 10:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:20.582 10:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:20.582 10:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --dhchap-key key3 00:10:20.582 10:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.582 10:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:20.582 10:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.582 10:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:20.582 10:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:20.582 10:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:20.842 00:10:20.842 10:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:20.842 10:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:20.842 10:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:21.101 10:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:21.101 10:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:21.101 10:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.101 10:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:21.101 10:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.101 10:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:21.101 { 00:10:21.101 "cntlid": 7, 00:10:21.101 "qid": 0, 00:10:21.101 "state": "enabled", 00:10:21.101 "thread": "nvmf_tgt_poll_group_000", 00:10:21.101 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca", 00:10:21.101 "listen_address": { 00:10:21.101 "trtype": "TCP", 00:10:21.101 "adrfam": "IPv4", 00:10:21.101 "traddr": "10.0.0.3", 00:10:21.101 "trsvcid": "4420" 00:10:21.101 }, 00:10:21.101 "peer_address": { 00:10:21.101 "trtype": "TCP", 00:10:21.101 "adrfam": "IPv4", 00:10:21.101 "traddr": "10.0.0.1", 00:10:21.101 "trsvcid": "34202" 00:10:21.101 }, 00:10:21.101 "auth": { 00:10:21.101 "state": "completed", 00:10:21.101 "digest": "sha256", 00:10:21.101 "dhgroup": "null" 00:10:21.101 } 00:10:21.101 } 00:10:21.101 ]' 00:10:21.101 10:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:21.361 10:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:21.361 10:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:21.361 10:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:21.361 10:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:21.361 10:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:21.361 10:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:21.361 10:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:21.620 10:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NzQ5ODMxYmRjYzRmOTMwOTM5MDZiYTJkOWE5ODBmOWI4NjY3ZmFhNjc5MTliNWJmNzg0ZjQ4ZWM4OWJmN2Y5MZ5XXyo=: 00:10:21.620 10:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --hostid 02f14d39-9b07-4abc-bc4a-e88d43a336ca -l 0 --dhchap-secret DHHC-1:03:NzQ5ODMxYmRjYzRmOTMwOTM5MDZiYTJkOWE5ODBmOWI4NjY3ZmFhNjc5MTliNWJmNzg0ZjQ4ZWM4OWJmN2Y5MZ5XXyo=: 00:10:22.188 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:22.188 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:22.188 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:10:22.188 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.188 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:22.188 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.188 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:22.188 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:22.188 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:22.188 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:22.756 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:10:22.756 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:22.756 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:22.756 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:22.756 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:22.756 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:22.756 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:22.756 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.756 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:22.756 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.756 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:22.756 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t 
tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:22.756 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:23.015 00:10:23.015 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:23.015 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:23.015 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:23.274 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:23.274 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:23.274 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.274 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:23.274 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.274 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:23.274 { 00:10:23.274 "cntlid": 9, 00:10:23.274 "qid": 0, 00:10:23.274 "state": "enabled", 00:10:23.274 "thread": "nvmf_tgt_poll_group_000", 00:10:23.274 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca", 00:10:23.274 "listen_address": { 00:10:23.274 "trtype": "TCP", 00:10:23.274 "adrfam": "IPv4", 00:10:23.274 "traddr": "10.0.0.3", 00:10:23.274 "trsvcid": "4420" 00:10:23.274 }, 00:10:23.274 "peer_address": { 00:10:23.274 "trtype": "TCP", 00:10:23.274 "adrfam": "IPv4", 00:10:23.274 "traddr": "10.0.0.1", 00:10:23.274 "trsvcid": "38334" 00:10:23.274 }, 00:10:23.274 "auth": { 00:10:23.274 "state": "completed", 00:10:23.274 "digest": "sha256", 00:10:23.274 "dhgroup": "ffdhe2048" 00:10:23.274 } 00:10:23.274 } 00:10:23.274 ]' 00:10:23.275 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:23.275 10:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:23.275 10:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:23.275 10:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:23.275 10:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:23.275 10:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:23.275 10:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:23.275 10:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:23.533 
10:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODQ0MDhmMjIwZThhYWFmYzNmNTA4NjJiOWExMzI1MGU2ZTZhZTUzZjUxYzI4MTA4LDyCxg==: --dhchap-ctrl-secret DHHC-1:03:NGQ1MmU3NmY0YWYxMGY5OGNmMDI5ODUyMTA1NzBjYzMyZTk3Y2I2NWUxMmJlMjUzZjgzNmIxYTUyMDg1OWQ0YarPIsE=: 00:10:23.533 10:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --hostid 02f14d39-9b07-4abc-bc4a-e88d43a336ca -l 0 --dhchap-secret DHHC-1:00:ODQ0MDhmMjIwZThhYWFmYzNmNTA4NjJiOWExMzI1MGU2ZTZhZTUzZjUxYzI4MTA4LDyCxg==: --dhchap-ctrl-secret DHHC-1:03:NGQ1MmU3NmY0YWYxMGY5OGNmMDI5ODUyMTA1NzBjYzMyZTk3Y2I2NWUxMmJlMjUzZjgzNmIxYTUyMDg1OWQ0YarPIsE=: 00:10:24.469 10:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:24.469 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:24.469 10:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:10:24.469 10:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.469 10:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:24.469 10:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.469 10:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:24.469 10:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:24.469 10:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:24.728 10:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:10:24.728 10:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:24.728 10:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:24.728 10:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:24.728 10:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:24.728 10:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:24.728 10:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:24.728 10:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.728 10:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:24.728 10:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.728 10:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:24.728 10:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:24.728 10:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:24.987 00:10:24.987 10:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:24.987 10:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:24.987 10:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:25.246 10:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:25.247 10:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:25.247 10:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.247 10:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:25.247 10:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.247 10:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:25.247 { 00:10:25.247 "cntlid": 11, 00:10:25.247 "qid": 0, 00:10:25.247 "state": "enabled", 00:10:25.247 "thread": "nvmf_tgt_poll_group_000", 00:10:25.247 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca", 00:10:25.247 "listen_address": { 00:10:25.247 "trtype": "TCP", 00:10:25.247 "adrfam": "IPv4", 00:10:25.247 "traddr": "10.0.0.3", 00:10:25.247 "trsvcid": "4420" 00:10:25.247 }, 00:10:25.247 "peer_address": { 00:10:25.247 "trtype": "TCP", 00:10:25.247 "adrfam": "IPv4", 00:10:25.247 "traddr": "10.0.0.1", 00:10:25.247 "trsvcid": "38360" 00:10:25.247 }, 00:10:25.247 "auth": { 00:10:25.247 "state": "completed", 00:10:25.247 "digest": "sha256", 00:10:25.247 "dhgroup": "ffdhe2048" 00:10:25.247 } 00:10:25.247 } 00:10:25.247 ]' 00:10:25.247 10:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:25.247 10:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:25.247 10:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:25.506 10:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:25.506 10:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:25.506 10:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:25.506 10:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:25.506 
10:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:25.765 10:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjM3YTMxZDFiZDhiODNkZjY3NTMwYjM4Mzg0YzQ5NWQJ3Ioy: --dhchap-ctrl-secret DHHC-1:02:NDVhYTVhZjlmY2Y0ZDdlNzY3NmM0M2NhYzhjMWYwMWIyYjNkMDkwNjQxYWJhZTFjDYJZFw==: 00:10:25.765 10:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --hostid 02f14d39-9b07-4abc-bc4a-e88d43a336ca -l 0 --dhchap-secret DHHC-1:01:NjM3YTMxZDFiZDhiODNkZjY3NTMwYjM4Mzg0YzQ5NWQJ3Ioy: --dhchap-ctrl-secret DHHC-1:02:NDVhYTVhZjlmY2Y0ZDdlNzY3NmM0M2NhYzhjMWYwMWIyYjNkMDkwNjQxYWJhZTFjDYJZFw==: 00:10:26.333 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:26.333 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:26.333 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:10:26.333 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.333 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:26.333 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.333 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:26.333 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:26.333 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:26.592 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:10:26.592 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:26.592 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:26.592 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:26.592 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:26.592 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:26.592 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:26.592 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.592 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:26.592 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:10:26.592 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:26.592 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:26.592 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:27.159 00:10:27.159 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:27.159 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:27.159 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:27.418 10:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:27.418 10:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:27.418 10:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.418 10:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:27.418 10:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.418 10:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:27.418 { 00:10:27.418 "cntlid": 13, 00:10:27.418 "qid": 0, 00:10:27.418 "state": "enabled", 00:10:27.418 "thread": "nvmf_tgt_poll_group_000", 00:10:27.418 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca", 00:10:27.418 "listen_address": { 00:10:27.418 "trtype": "TCP", 00:10:27.418 "adrfam": "IPv4", 00:10:27.418 "traddr": "10.0.0.3", 00:10:27.418 "trsvcid": "4420" 00:10:27.418 }, 00:10:27.418 "peer_address": { 00:10:27.418 "trtype": "TCP", 00:10:27.418 "adrfam": "IPv4", 00:10:27.418 "traddr": "10.0.0.1", 00:10:27.418 "trsvcid": "38382" 00:10:27.418 }, 00:10:27.418 "auth": { 00:10:27.418 "state": "completed", 00:10:27.418 "digest": "sha256", 00:10:27.418 "dhgroup": "ffdhe2048" 00:10:27.418 } 00:10:27.418 } 00:10:27.418 ]' 00:10:27.418 10:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:27.418 10:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:27.418 10:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:27.418 10:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:27.418 10:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:27.418 10:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:27.418 10:54:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:27.418 10:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:27.677 10:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWY2M2U4YzhiNWEyNGYyOGI2MWNiZWE2M2FiZjA5ZGUwMTcxYzlhNWY5MTBjM2NkviDV6Q==: --dhchap-ctrl-secret DHHC-1:01:ZDI4YjU0MDBhZDY2ZDYyM2JiOWVlNDdhOTE2YTFhZmKfhOZS: 00:10:27.677 10:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --hostid 02f14d39-9b07-4abc-bc4a-e88d43a336ca -l 0 --dhchap-secret DHHC-1:02:ZWY2M2U4YzhiNWEyNGYyOGI2MWNiZWE2M2FiZjA5ZGUwMTcxYzlhNWY5MTBjM2NkviDV6Q==: --dhchap-ctrl-secret DHHC-1:01:ZDI4YjU0MDBhZDY2ZDYyM2JiOWVlNDdhOTE2YTFhZmKfhOZS: 00:10:28.613 10:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:28.613 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:28.613 10:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:10:28.613 10:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.613 10:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:28.613 10:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.613 10:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:28.613 10:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:28.613 10:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:28.872 10:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:10:28.872 10:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:28.872 10:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:28.872 10:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:28.872 10:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:28.872 10:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:28.872 10:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --dhchap-key key3 00:10:28.872 10:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.872 10:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
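Each connect_authenticate round recorded in this log repeats the same RPC sequence against the host-side bdev_nvme app (RPC socket /var/tmp/host.sock, driven here via hostrpc) and the nvmf target. The lines below are a minimal condensed sketch of one such round, not the exact flow of target/auth.sh: the addresses, NQNs and socket path are copied from this run, key1/ckey1 stand for DH-HMAC-CHAP keyring entries loaded earlier in the test, and the un-prefixed rpc.py calls are assumed to reach the target app's default RPC socket.

    #!/usr/bin/env bash
    # Condensed sketch of one sha256/ffdhe2048 authentication round, as exercised above.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca

    # 1. Restrict the host app to one digest/dhgroup combination for this round.
    $rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

    # 2. Allow the host on the subsystem with the chosen DH-HMAC-CHAP keys (target-side RPC).
    $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # 3. Attach a controller from the host app, authenticating with the same keys.
    $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
        -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # 4. Verify: the controller came up and the target reports the qpair's auth as completed.
    $rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'
    $rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'

    # 5. Tear down before the next digest/dhgroup/key combination.
    $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    $rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

The nvme-cli leg of each round (nvme connect ... --dhchap-secret ... --dhchap-ctrl-secret ... followed by nvme disconnect) uses the literal DHHC-1 secrets shown in the log instead of keyring names, but follows the same add_host / connect / verify / remove_host pattern.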
00:10:28.872 10:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.872 10:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:28.872 10:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:28.872 10:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:29.130 00:10:29.130 10:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:29.130 10:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:29.130 10:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:29.389 10:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:29.389 10:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:29.389 10:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.389 10:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:29.389 10:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.389 10:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:29.389 { 00:10:29.389 "cntlid": 15, 00:10:29.389 "qid": 0, 00:10:29.389 "state": "enabled", 00:10:29.389 "thread": "nvmf_tgt_poll_group_000", 00:10:29.389 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca", 00:10:29.389 "listen_address": { 00:10:29.389 "trtype": "TCP", 00:10:29.389 "adrfam": "IPv4", 00:10:29.389 "traddr": "10.0.0.3", 00:10:29.389 "trsvcid": "4420" 00:10:29.389 }, 00:10:29.389 "peer_address": { 00:10:29.389 "trtype": "TCP", 00:10:29.389 "adrfam": "IPv4", 00:10:29.389 "traddr": "10.0.0.1", 00:10:29.389 "trsvcid": "38402" 00:10:29.389 }, 00:10:29.389 "auth": { 00:10:29.389 "state": "completed", 00:10:29.389 "digest": "sha256", 00:10:29.389 "dhgroup": "ffdhe2048" 00:10:29.389 } 00:10:29.389 } 00:10:29.389 ]' 00:10:29.390 10:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:29.390 10:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:29.390 10:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:29.742 10:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:29.742 10:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:29.742 10:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:29.742 
10:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:29.742 10:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:29.742 10:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzQ5ODMxYmRjYzRmOTMwOTM5MDZiYTJkOWE5ODBmOWI4NjY3ZmFhNjc5MTliNWJmNzg0ZjQ4ZWM4OWJmN2Y5MZ5XXyo=: 00:10:29.742 10:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --hostid 02f14d39-9b07-4abc-bc4a-e88d43a336ca -l 0 --dhchap-secret DHHC-1:03:NzQ5ODMxYmRjYzRmOTMwOTM5MDZiYTJkOWE5ODBmOWI4NjY3ZmFhNjc5MTliNWJmNzg0ZjQ4ZWM4OWJmN2Y5MZ5XXyo=: 00:10:30.696 10:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:30.696 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:30.696 10:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:10:30.696 10:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.696 10:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:30.696 10:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.696 10:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:30.696 10:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:30.696 10:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:30.696 10:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:30.696 10:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:10:30.696 10:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:30.696 10:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:30.696 10:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:30.696 10:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:30.696 10:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:30.696 10:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:30.696 10:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.696 10:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:10:30.696 10:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.696 10:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:30.696 10:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:30.696 10:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:31.262 00:10:31.262 10:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:31.262 10:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:31.262 10:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:31.521 10:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:31.521 10:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:31.521 10:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.521 10:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:31.521 10:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.521 10:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:31.521 { 00:10:31.521 "cntlid": 17, 00:10:31.521 "qid": 0, 00:10:31.521 "state": "enabled", 00:10:31.521 "thread": "nvmf_tgt_poll_group_000", 00:10:31.521 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca", 00:10:31.521 "listen_address": { 00:10:31.521 "trtype": "TCP", 00:10:31.521 "adrfam": "IPv4", 00:10:31.521 "traddr": "10.0.0.3", 00:10:31.521 "trsvcid": "4420" 00:10:31.521 }, 00:10:31.521 "peer_address": { 00:10:31.521 "trtype": "TCP", 00:10:31.521 "adrfam": "IPv4", 00:10:31.521 "traddr": "10.0.0.1", 00:10:31.521 "trsvcid": "38428" 00:10:31.521 }, 00:10:31.521 "auth": { 00:10:31.521 "state": "completed", 00:10:31.521 "digest": "sha256", 00:10:31.521 "dhgroup": "ffdhe3072" 00:10:31.521 } 00:10:31.521 } 00:10:31.521 ]' 00:10:31.521 10:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:31.521 10:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:31.521 10:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:31.521 10:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:31.521 10:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:31.521 10:54:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:31.521 10:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:31.521 10:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:31.779 10:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODQ0MDhmMjIwZThhYWFmYzNmNTA4NjJiOWExMzI1MGU2ZTZhZTUzZjUxYzI4MTA4LDyCxg==: --dhchap-ctrl-secret DHHC-1:03:NGQ1MmU3NmY0YWYxMGY5OGNmMDI5ODUyMTA1NzBjYzMyZTk3Y2I2NWUxMmJlMjUzZjgzNmIxYTUyMDg1OWQ0YarPIsE=: 00:10:31.779 10:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --hostid 02f14d39-9b07-4abc-bc4a-e88d43a336ca -l 0 --dhchap-secret DHHC-1:00:ODQ0MDhmMjIwZThhYWFmYzNmNTA4NjJiOWExMzI1MGU2ZTZhZTUzZjUxYzI4MTA4LDyCxg==: --dhchap-ctrl-secret DHHC-1:03:NGQ1MmU3NmY0YWYxMGY5OGNmMDI5ODUyMTA1NzBjYzMyZTk3Y2I2NWUxMmJlMjUzZjgzNmIxYTUyMDg1OWQ0YarPIsE=: 00:10:32.714 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:32.714 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:32.714 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:10:32.714 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.714 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:32.714 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.714 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:32.714 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:32.714 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:32.714 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:10:32.714 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:32.714 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:32.714 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:32.714 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:32.714 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:32.714 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:10:32.714 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.714 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:32.972 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.972 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:32.972 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:32.972 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:33.230 00:10:33.230 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:33.230 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:33.230 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:33.489 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:33.489 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:33.489 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.489 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:33.489 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.489 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:33.489 { 00:10:33.489 "cntlid": 19, 00:10:33.489 "qid": 0, 00:10:33.489 "state": "enabled", 00:10:33.489 "thread": "nvmf_tgt_poll_group_000", 00:10:33.489 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca", 00:10:33.489 "listen_address": { 00:10:33.489 "trtype": "TCP", 00:10:33.489 "adrfam": "IPv4", 00:10:33.489 "traddr": "10.0.0.3", 00:10:33.489 "trsvcid": "4420" 00:10:33.489 }, 00:10:33.489 "peer_address": { 00:10:33.489 "trtype": "TCP", 00:10:33.489 "adrfam": "IPv4", 00:10:33.489 "traddr": "10.0.0.1", 00:10:33.489 "trsvcid": "35776" 00:10:33.489 }, 00:10:33.489 "auth": { 00:10:33.489 "state": "completed", 00:10:33.489 "digest": "sha256", 00:10:33.489 "dhgroup": "ffdhe3072" 00:10:33.489 } 00:10:33.489 } 00:10:33.489 ]' 00:10:33.489 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:33.489 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:33.489 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:33.489 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:33.489 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:33.489 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:33.489 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:33.489 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:33.748 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjM3YTMxZDFiZDhiODNkZjY3NTMwYjM4Mzg0YzQ5NWQJ3Ioy: --dhchap-ctrl-secret DHHC-1:02:NDVhYTVhZjlmY2Y0ZDdlNzY3NmM0M2NhYzhjMWYwMWIyYjNkMDkwNjQxYWJhZTFjDYJZFw==: 00:10:33.748 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --hostid 02f14d39-9b07-4abc-bc4a-e88d43a336ca -l 0 --dhchap-secret DHHC-1:01:NjM3YTMxZDFiZDhiODNkZjY3NTMwYjM4Mzg0YzQ5NWQJ3Ioy: --dhchap-ctrl-secret DHHC-1:02:NDVhYTVhZjlmY2Y0ZDdlNzY3NmM0M2NhYzhjMWYwMWIyYjNkMDkwNjQxYWJhZTFjDYJZFw==: 00:10:34.683 10:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:34.683 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:34.683 10:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:10:34.683 10:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.683 10:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:34.683 10:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.683 10:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:34.683 10:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:34.683 10:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:34.683 10:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:10:34.683 10:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:34.683 10:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:34.683 10:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:34.683 10:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:34.683 10:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:34.683 10:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:34.683 10:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.683 10:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:34.683 10:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.683 10:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:34.683 10:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:34.683 10:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:35.248 00:10:35.248 10:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:35.248 10:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:35.248 10:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:35.506 10:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:35.506 10:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:35.506 10:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.506 10:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:35.506 10:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.506 10:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:35.506 { 00:10:35.506 "cntlid": 21, 00:10:35.506 "qid": 0, 00:10:35.506 "state": "enabled", 00:10:35.506 "thread": "nvmf_tgt_poll_group_000", 00:10:35.506 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca", 00:10:35.506 "listen_address": { 00:10:35.506 "trtype": "TCP", 00:10:35.506 "adrfam": "IPv4", 00:10:35.506 "traddr": "10.0.0.3", 00:10:35.506 "trsvcid": "4420" 00:10:35.506 }, 00:10:35.506 "peer_address": { 00:10:35.506 "trtype": "TCP", 00:10:35.506 "adrfam": "IPv4", 00:10:35.506 "traddr": "10.0.0.1", 00:10:35.506 "trsvcid": "35806" 00:10:35.506 }, 00:10:35.506 "auth": { 00:10:35.506 "state": "completed", 00:10:35.506 "digest": "sha256", 00:10:35.506 "dhgroup": "ffdhe3072" 00:10:35.506 } 00:10:35.506 } 00:10:35.506 ]' 00:10:35.506 10:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:35.506 10:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:35.506 10:54:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:35.506 10:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:35.506 10:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:35.506 10:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:35.506 10:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:35.506 10:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:35.765 10:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWY2M2U4YzhiNWEyNGYyOGI2MWNiZWE2M2FiZjA5ZGUwMTcxYzlhNWY5MTBjM2NkviDV6Q==: --dhchap-ctrl-secret DHHC-1:01:ZDI4YjU0MDBhZDY2ZDYyM2JiOWVlNDdhOTE2YTFhZmKfhOZS: 00:10:35.765 10:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --hostid 02f14d39-9b07-4abc-bc4a-e88d43a336ca -l 0 --dhchap-secret DHHC-1:02:ZWY2M2U4YzhiNWEyNGYyOGI2MWNiZWE2M2FiZjA5ZGUwMTcxYzlhNWY5MTBjM2NkviDV6Q==: --dhchap-ctrl-secret DHHC-1:01:ZDI4YjU0MDBhZDY2ZDYyM2JiOWVlNDdhOTE2YTFhZmKfhOZS: 00:10:36.700 10:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:36.700 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:36.700 10:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:10:36.700 10:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.700 10:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:36.700 10:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.700 10:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:36.700 10:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:36.700 10:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:36.958 10:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:10:36.958 10:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:36.958 10:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:36.958 10:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:36.958 10:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:36.958 10:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:36.958 10:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --dhchap-key key3 00:10:36.958 10:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.958 10:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:36.958 10:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.958 10:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:36.958 10:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:36.958 10:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:37.217 00:10:37.217 10:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:37.217 10:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:37.217 10:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:37.475 10:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:37.475 10:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:37.475 10:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.475 10:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:37.475 10:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.475 10:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:37.475 { 00:10:37.475 "cntlid": 23, 00:10:37.475 "qid": 0, 00:10:37.475 "state": "enabled", 00:10:37.475 "thread": "nvmf_tgt_poll_group_000", 00:10:37.475 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca", 00:10:37.475 "listen_address": { 00:10:37.475 "trtype": "TCP", 00:10:37.475 "adrfam": "IPv4", 00:10:37.475 "traddr": "10.0.0.3", 00:10:37.475 "trsvcid": "4420" 00:10:37.475 }, 00:10:37.475 "peer_address": { 00:10:37.475 "trtype": "TCP", 00:10:37.475 "adrfam": "IPv4", 00:10:37.475 "traddr": "10.0.0.1", 00:10:37.475 "trsvcid": "35834" 00:10:37.475 }, 00:10:37.475 "auth": { 00:10:37.475 "state": "completed", 00:10:37.475 "digest": "sha256", 00:10:37.475 "dhgroup": "ffdhe3072" 00:10:37.475 } 00:10:37.475 } 00:10:37.475 ]' 00:10:37.475 10:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:37.734 10:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:10:37.734 10:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:37.734 10:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:37.734 10:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:37.734 10:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:37.734 10:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:37.734 10:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:37.992 10:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzQ5ODMxYmRjYzRmOTMwOTM5MDZiYTJkOWE5ODBmOWI4NjY3ZmFhNjc5MTliNWJmNzg0ZjQ4ZWM4OWJmN2Y5MZ5XXyo=: 00:10:37.992 10:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --hostid 02f14d39-9b07-4abc-bc4a-e88d43a336ca -l 0 --dhchap-secret DHHC-1:03:NzQ5ODMxYmRjYzRmOTMwOTM5MDZiYTJkOWE5ODBmOWI4NjY3ZmFhNjc5MTliNWJmNzg0ZjQ4ZWM4OWJmN2Y5MZ5XXyo=: 00:10:38.928 10:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:38.928 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:38.928 10:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:10:38.928 10:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.928 10:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:38.928 10:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.928 10:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:38.928 10:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:38.928 10:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:38.928 10:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:38.928 10:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:10:38.928 10:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:38.928 10:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:38.928 10:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:10:38.928 10:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:38.928 10:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:38.928 10:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:38.928 10:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.928 10:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:38.928 10:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.928 10:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:38.928 10:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:38.928 10:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:39.495 00:10:39.495 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:39.495 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:39.495 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:39.754 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:39.754 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:39.754 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.754 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:39.754 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.754 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:39.754 { 00:10:39.754 "cntlid": 25, 00:10:39.754 "qid": 0, 00:10:39.754 "state": "enabled", 00:10:39.754 "thread": "nvmf_tgt_poll_group_000", 00:10:39.754 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca", 00:10:39.754 "listen_address": { 00:10:39.754 "trtype": "TCP", 00:10:39.754 "adrfam": "IPv4", 00:10:39.754 "traddr": "10.0.0.3", 00:10:39.754 "trsvcid": "4420" 00:10:39.754 }, 00:10:39.754 "peer_address": { 00:10:39.754 "trtype": "TCP", 00:10:39.754 "adrfam": "IPv4", 00:10:39.754 "traddr": "10.0.0.1", 00:10:39.754 "trsvcid": "35856" 00:10:39.754 }, 00:10:39.754 "auth": { 00:10:39.754 "state": "completed", 00:10:39.754 "digest": "sha256", 00:10:39.754 "dhgroup": "ffdhe4096" 00:10:39.754 } 00:10:39.754 } 00:10:39.754 ]' 00:10:39.754 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:10:39.754 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:39.754 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:39.754 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:39.754 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:39.754 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:39.754 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:39.754 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:40.013 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODQ0MDhmMjIwZThhYWFmYzNmNTA4NjJiOWExMzI1MGU2ZTZhZTUzZjUxYzI4MTA4LDyCxg==: --dhchap-ctrl-secret DHHC-1:03:NGQ1MmU3NmY0YWYxMGY5OGNmMDI5ODUyMTA1NzBjYzMyZTk3Y2I2NWUxMmJlMjUzZjgzNmIxYTUyMDg1OWQ0YarPIsE=: 00:10:40.013 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --hostid 02f14d39-9b07-4abc-bc4a-e88d43a336ca -l 0 --dhchap-secret DHHC-1:00:ODQ0MDhmMjIwZThhYWFmYzNmNTA4NjJiOWExMzI1MGU2ZTZhZTUzZjUxYzI4MTA4LDyCxg==: --dhchap-ctrl-secret DHHC-1:03:NGQ1MmU3NmY0YWYxMGY5OGNmMDI5ODUyMTA1NzBjYzMyZTk3Y2I2NWUxMmJlMjUzZjgzNmIxYTUyMDg1OWQ0YarPIsE=: 00:10:40.948 10:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:40.948 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:40.948 10:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:10:40.948 10:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.948 10:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:40.948 10:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.948 10:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:40.948 10:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:40.948 10:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:41.207 10:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:10:41.207 10:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:41.207 10:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:41.207 10:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:10:41.207 10:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:41.207 10:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:41.207 10:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:41.207 10:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.207 10:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:41.207 10:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.207 10:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:41.207 10:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:41.207 10:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:41.466 00:10:41.466 10:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:41.466 10:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:41.466 10:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:41.725 10:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:41.725 10:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:41.725 10:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.725 10:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:41.725 10:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.725 10:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:41.725 { 00:10:41.725 "cntlid": 27, 00:10:41.725 "qid": 0, 00:10:41.725 "state": "enabled", 00:10:41.725 "thread": "nvmf_tgt_poll_group_000", 00:10:41.725 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca", 00:10:41.725 "listen_address": { 00:10:41.725 "trtype": "TCP", 00:10:41.725 "adrfam": "IPv4", 00:10:41.725 "traddr": "10.0.0.3", 00:10:41.725 "trsvcid": "4420" 00:10:41.725 }, 00:10:41.725 "peer_address": { 00:10:41.725 "trtype": "TCP", 00:10:41.725 "adrfam": "IPv4", 00:10:41.725 "traddr": "10.0.0.1", 00:10:41.725 "trsvcid": "51208" 00:10:41.725 }, 00:10:41.725 "auth": { 00:10:41.725 "state": "completed", 
00:10:41.725 "digest": "sha256", 00:10:41.725 "dhgroup": "ffdhe4096" 00:10:41.725 } 00:10:41.725 } 00:10:41.725 ]' 00:10:41.725 10:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:41.725 10:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:41.983 10:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:41.983 10:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:41.983 10:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:41.983 10:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:41.983 10:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:41.983 10:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:42.242 10:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjM3YTMxZDFiZDhiODNkZjY3NTMwYjM4Mzg0YzQ5NWQJ3Ioy: --dhchap-ctrl-secret DHHC-1:02:NDVhYTVhZjlmY2Y0ZDdlNzY3NmM0M2NhYzhjMWYwMWIyYjNkMDkwNjQxYWJhZTFjDYJZFw==: 00:10:42.242 10:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --hostid 02f14d39-9b07-4abc-bc4a-e88d43a336ca -l 0 --dhchap-secret DHHC-1:01:NjM3YTMxZDFiZDhiODNkZjY3NTMwYjM4Mzg0YzQ5NWQJ3Ioy: --dhchap-ctrl-secret DHHC-1:02:NDVhYTVhZjlmY2Y0ZDdlNzY3NmM0M2NhYzhjMWYwMWIyYjNkMDkwNjQxYWJhZTFjDYJZFw==: 00:10:42.809 10:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:42.809 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:42.809 10:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:10:42.809 10:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.809 10:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:42.809 10:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.809 10:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:42.809 10:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:42.809 10:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:43.068 10:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:10:43.068 10:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:43.068 10:54:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:43.068 10:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:10:43.068 10:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:43.068 10:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:43.068 10:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:43.068 10:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.068 10:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:43.068 10:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.068 10:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:43.068 10:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:43.068 10:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:43.635 00:10:43.635 10:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:43.635 10:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:43.635 10:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:43.894 10:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:43.894 10:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:43.894 10:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.894 10:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:43.894 10:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.894 10:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:43.894 { 00:10:43.894 "cntlid": 29, 00:10:43.894 "qid": 0, 00:10:43.894 "state": "enabled", 00:10:43.894 "thread": "nvmf_tgt_poll_group_000", 00:10:43.894 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca", 00:10:43.894 "listen_address": { 00:10:43.894 "trtype": "TCP", 00:10:43.894 "adrfam": "IPv4", 00:10:43.894 "traddr": "10.0.0.3", 00:10:43.894 "trsvcid": "4420" 00:10:43.894 }, 00:10:43.894 "peer_address": { 00:10:43.894 "trtype": "TCP", 00:10:43.894 "adrfam": 
"IPv4", 00:10:43.894 "traddr": "10.0.0.1", 00:10:43.894 "trsvcid": "51230" 00:10:43.894 }, 00:10:43.894 "auth": { 00:10:43.894 "state": "completed", 00:10:43.894 "digest": "sha256", 00:10:43.894 "dhgroup": "ffdhe4096" 00:10:43.894 } 00:10:43.894 } 00:10:43.894 ]' 00:10:43.894 10:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:43.894 10:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:43.894 10:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:43.895 10:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:43.895 10:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:43.895 10:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:43.895 10:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:43.895 10:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:44.153 10:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWY2M2U4YzhiNWEyNGYyOGI2MWNiZWE2M2FiZjA5ZGUwMTcxYzlhNWY5MTBjM2NkviDV6Q==: --dhchap-ctrl-secret DHHC-1:01:ZDI4YjU0MDBhZDY2ZDYyM2JiOWVlNDdhOTE2YTFhZmKfhOZS: 00:10:44.153 10:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --hostid 02f14d39-9b07-4abc-bc4a-e88d43a336ca -l 0 --dhchap-secret DHHC-1:02:ZWY2M2U4YzhiNWEyNGYyOGI2MWNiZWE2M2FiZjA5ZGUwMTcxYzlhNWY5MTBjM2NkviDV6Q==: --dhchap-ctrl-secret DHHC-1:01:ZDI4YjU0MDBhZDY2ZDYyM2JiOWVlNDdhOTE2YTFhZmKfhOZS: 00:10:44.719 10:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:44.719 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:44.720 10:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:10:44.720 10:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.720 10:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:44.720 10:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.720 10:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:44.720 10:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:44.720 10:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:44.979 10:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:10:44.979 10:54:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:44.979 10:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:44.979 10:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:10:44.979 10:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:44.979 10:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:44.979 10:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --dhchap-key key3 00:10:44.979 10:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.979 10:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:44.979 10:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.979 10:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:44.979 10:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:44.979 10:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:45.548 00:10:45.548 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:45.548 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:45.548 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:45.807 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:45.808 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:45.808 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.808 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:45.808 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.808 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:45.808 { 00:10:45.808 "cntlid": 31, 00:10:45.808 "qid": 0, 00:10:45.808 "state": "enabled", 00:10:45.808 "thread": "nvmf_tgt_poll_group_000", 00:10:45.808 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca", 00:10:45.808 "listen_address": { 00:10:45.808 "trtype": "TCP", 00:10:45.808 "adrfam": "IPv4", 00:10:45.808 "traddr": "10.0.0.3", 00:10:45.808 "trsvcid": "4420" 00:10:45.808 }, 00:10:45.808 "peer_address": { 00:10:45.808 "trtype": "TCP", 
00:10:45.808 "adrfam": "IPv4", 00:10:45.808 "traddr": "10.0.0.1", 00:10:45.808 "trsvcid": "51276" 00:10:45.808 }, 00:10:45.808 "auth": { 00:10:45.808 "state": "completed", 00:10:45.808 "digest": "sha256", 00:10:45.808 "dhgroup": "ffdhe4096" 00:10:45.808 } 00:10:45.808 } 00:10:45.808 ]' 00:10:45.808 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:45.808 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:45.808 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:45.808 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:45.808 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:45.808 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:45.808 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:45.808 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:46.376 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzQ5ODMxYmRjYzRmOTMwOTM5MDZiYTJkOWE5ODBmOWI4NjY3ZmFhNjc5MTliNWJmNzg0ZjQ4ZWM4OWJmN2Y5MZ5XXyo=: 00:10:46.376 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --hostid 02f14d39-9b07-4abc-bc4a-e88d43a336ca -l 0 --dhchap-secret DHHC-1:03:NzQ5ODMxYmRjYzRmOTMwOTM5MDZiYTJkOWE5ODBmOWI4NjY3ZmFhNjc5MTliNWJmNzg0ZjQ4ZWM4OWJmN2Y5MZ5XXyo=: 00:10:46.944 10:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:46.944 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:46.944 10:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:10:46.944 10:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.944 10:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:46.944 10:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.944 10:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:46.944 10:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:46.944 10:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:46.944 10:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:47.203 10:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:10:47.203 
10:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:47.203 10:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:47.203 10:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:10:47.203 10:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:47.203 10:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:47.203 10:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:47.203 10:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.203 10:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:47.203 10:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.203 10:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:47.203 10:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:47.203 10:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:47.771 00:10:47.771 10:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:47.771 10:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:47.771 10:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:48.041 10:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:48.041 10:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:48.041 10:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.041 10:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:48.041 10:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.041 10:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:48.041 { 00:10:48.041 "cntlid": 33, 00:10:48.042 "qid": 0, 00:10:48.042 "state": "enabled", 00:10:48.042 "thread": "nvmf_tgt_poll_group_000", 00:10:48.042 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca", 00:10:48.042 "listen_address": { 00:10:48.042 "trtype": "TCP", 00:10:48.042 "adrfam": "IPv4", 00:10:48.042 "traddr": 
"10.0.0.3", 00:10:48.042 "trsvcid": "4420" 00:10:48.042 }, 00:10:48.042 "peer_address": { 00:10:48.042 "trtype": "TCP", 00:10:48.042 "adrfam": "IPv4", 00:10:48.042 "traddr": "10.0.0.1", 00:10:48.042 "trsvcid": "51316" 00:10:48.042 }, 00:10:48.042 "auth": { 00:10:48.042 "state": "completed", 00:10:48.042 "digest": "sha256", 00:10:48.042 "dhgroup": "ffdhe6144" 00:10:48.042 } 00:10:48.042 } 00:10:48.042 ]' 00:10:48.042 10:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:48.042 10:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:48.042 10:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:48.042 10:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:48.042 10:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:48.042 10:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:48.042 10:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:48.042 10:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:48.307 10:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODQ0MDhmMjIwZThhYWFmYzNmNTA4NjJiOWExMzI1MGU2ZTZhZTUzZjUxYzI4MTA4LDyCxg==: --dhchap-ctrl-secret DHHC-1:03:NGQ1MmU3NmY0YWYxMGY5OGNmMDI5ODUyMTA1NzBjYzMyZTk3Y2I2NWUxMmJlMjUzZjgzNmIxYTUyMDg1OWQ0YarPIsE=: 00:10:48.307 10:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --hostid 02f14d39-9b07-4abc-bc4a-e88d43a336ca -l 0 --dhchap-secret DHHC-1:00:ODQ0MDhmMjIwZThhYWFmYzNmNTA4NjJiOWExMzI1MGU2ZTZhZTUzZjUxYzI4MTA4LDyCxg==: --dhchap-ctrl-secret DHHC-1:03:NGQ1MmU3NmY0YWYxMGY5OGNmMDI5ODUyMTA1NzBjYzMyZTk3Y2I2NWUxMmJlMjUzZjgzNmIxYTUyMDg1OWQ0YarPIsE=: 00:10:48.877 10:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:48.877 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:48.877 10:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:10:48.877 10:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.877 10:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:49.173 10:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.173 10:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:49.173 10:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:49.173 10:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:49.173 10:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:10:49.173 10:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:49.173 10:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:49.173 10:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:10:49.173 10:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:49.173 10:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:49.173 10:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:49.173 10:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.173 10:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:49.173 10:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.173 10:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:49.173 10:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:49.173 10:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:49.763 00:10:49.763 10:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:49.763 10:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:49.763 10:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:50.021 10:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:50.021 10:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:50.021 10:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.022 10:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:50.022 10:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.022 10:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:50.022 { 00:10:50.022 "cntlid": 35, 00:10:50.022 "qid": 0, 00:10:50.022 "state": "enabled", 00:10:50.022 "thread": "nvmf_tgt_poll_group_000", 
00:10:50.022 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca", 00:10:50.022 "listen_address": { 00:10:50.022 "trtype": "TCP", 00:10:50.022 "adrfam": "IPv4", 00:10:50.022 "traddr": "10.0.0.3", 00:10:50.022 "trsvcid": "4420" 00:10:50.022 }, 00:10:50.022 "peer_address": { 00:10:50.022 "trtype": "TCP", 00:10:50.022 "adrfam": "IPv4", 00:10:50.022 "traddr": "10.0.0.1", 00:10:50.022 "trsvcid": "51342" 00:10:50.022 }, 00:10:50.022 "auth": { 00:10:50.022 "state": "completed", 00:10:50.022 "digest": "sha256", 00:10:50.022 "dhgroup": "ffdhe6144" 00:10:50.022 } 00:10:50.022 } 00:10:50.022 ]' 00:10:50.022 10:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:50.022 10:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:50.022 10:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:50.022 10:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:50.022 10:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:50.280 10:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:50.280 10:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:50.280 10:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:50.538 10:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjM3YTMxZDFiZDhiODNkZjY3NTMwYjM4Mzg0YzQ5NWQJ3Ioy: --dhchap-ctrl-secret DHHC-1:02:NDVhYTVhZjlmY2Y0ZDdlNzY3NmM0M2NhYzhjMWYwMWIyYjNkMDkwNjQxYWJhZTFjDYJZFw==: 00:10:50.538 10:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --hostid 02f14d39-9b07-4abc-bc4a-e88d43a336ca -l 0 --dhchap-secret DHHC-1:01:NjM3YTMxZDFiZDhiODNkZjY3NTMwYjM4Mzg0YzQ5NWQJ3Ioy: --dhchap-ctrl-secret DHHC-1:02:NDVhYTVhZjlmY2Y0ZDdlNzY3NmM0M2NhYzhjMWYwMWIyYjNkMDkwNjQxYWJhZTFjDYJZFw==: 00:10:51.106 10:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:51.106 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:51.106 10:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:10:51.106 10:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.106 10:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:51.106 10:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.106 10:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:51.106 10:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:51.106 10:54:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:51.365 10:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:10:51.365 10:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:51.365 10:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:51.365 10:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:10:51.365 10:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:51.365 10:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:51.365 10:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:51.365 10:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.365 10:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:51.365 10:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.365 10:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:51.365 10:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:51.365 10:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:51.933 00:10:51.933 10:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:51.933 10:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:51.933 10:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:52.191 10:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:52.192 10:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:52.192 10:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.192 10:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:52.192 10:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.192 10:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:52.192 { 
00:10:52.192 "cntlid": 37, 00:10:52.192 "qid": 0, 00:10:52.192 "state": "enabled", 00:10:52.192 "thread": "nvmf_tgt_poll_group_000", 00:10:52.192 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca", 00:10:52.192 "listen_address": { 00:10:52.192 "trtype": "TCP", 00:10:52.192 "adrfam": "IPv4", 00:10:52.192 "traddr": "10.0.0.3", 00:10:52.192 "trsvcid": "4420" 00:10:52.192 }, 00:10:52.192 "peer_address": { 00:10:52.192 "trtype": "TCP", 00:10:52.192 "adrfam": "IPv4", 00:10:52.192 "traddr": "10.0.0.1", 00:10:52.192 "trsvcid": "40840" 00:10:52.192 }, 00:10:52.192 "auth": { 00:10:52.192 "state": "completed", 00:10:52.192 "digest": "sha256", 00:10:52.192 "dhgroup": "ffdhe6144" 00:10:52.192 } 00:10:52.192 } 00:10:52.192 ]' 00:10:52.192 10:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:52.192 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:52.192 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:52.451 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:52.451 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:52.451 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:52.451 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:52.451 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:52.710 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWY2M2U4YzhiNWEyNGYyOGI2MWNiZWE2M2FiZjA5ZGUwMTcxYzlhNWY5MTBjM2NkviDV6Q==: --dhchap-ctrl-secret DHHC-1:01:ZDI4YjU0MDBhZDY2ZDYyM2JiOWVlNDdhOTE2YTFhZmKfhOZS: 00:10:52.710 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --hostid 02f14d39-9b07-4abc-bc4a-e88d43a336ca -l 0 --dhchap-secret DHHC-1:02:ZWY2M2U4YzhiNWEyNGYyOGI2MWNiZWE2M2FiZjA5ZGUwMTcxYzlhNWY5MTBjM2NkviDV6Q==: --dhchap-ctrl-secret DHHC-1:01:ZDI4YjU0MDBhZDY2ZDYyM2JiOWVlNDdhOTE2YTFhZmKfhOZS: 00:10:53.278 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:53.278 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:53.278 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:10:53.278 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.278 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:53.278 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.278 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:53.278 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:53.278 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:53.537 10:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:10:53.537 10:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:53.537 10:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:53.537 10:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:10:53.537 10:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:53.537 10:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:53.537 10:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --dhchap-key key3 00:10:53.537 10:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.537 10:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:53.537 10:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.537 10:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:53.537 10:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:53.537 10:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:54.104 00:10:54.104 10:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:54.104 10:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:54.104 10:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:54.363 10:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:54.363 10:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:54.363 10:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.363 10:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:54.363 10:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.363 10:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 
00:10:54.363 { 00:10:54.363 "cntlid": 39, 00:10:54.363 "qid": 0, 00:10:54.363 "state": "enabled", 00:10:54.363 "thread": "nvmf_tgt_poll_group_000", 00:10:54.363 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca", 00:10:54.363 "listen_address": { 00:10:54.363 "trtype": "TCP", 00:10:54.363 "adrfam": "IPv4", 00:10:54.363 "traddr": "10.0.0.3", 00:10:54.363 "trsvcid": "4420" 00:10:54.363 }, 00:10:54.363 "peer_address": { 00:10:54.363 "trtype": "TCP", 00:10:54.363 "adrfam": "IPv4", 00:10:54.363 "traddr": "10.0.0.1", 00:10:54.363 "trsvcid": "40852" 00:10:54.363 }, 00:10:54.363 "auth": { 00:10:54.363 "state": "completed", 00:10:54.363 "digest": "sha256", 00:10:54.363 "dhgroup": "ffdhe6144" 00:10:54.363 } 00:10:54.363 } 00:10:54.363 ]' 00:10:54.363 10:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:54.363 10:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:54.363 10:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:54.363 10:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:54.363 10:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:54.622 10:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:54.622 10:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:54.622 10:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:54.881 10:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzQ5ODMxYmRjYzRmOTMwOTM5MDZiYTJkOWE5ODBmOWI4NjY3ZmFhNjc5MTliNWJmNzg0ZjQ4ZWM4OWJmN2Y5MZ5XXyo=: 00:10:54.881 10:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --hostid 02f14d39-9b07-4abc-bc4a-e88d43a336ca -l 0 --dhchap-secret DHHC-1:03:NzQ5ODMxYmRjYzRmOTMwOTM5MDZiYTJkOWE5ODBmOWI4NjY3ZmFhNjc5MTliNWJmNzg0ZjQ4ZWM4OWJmN2Y5MZ5XXyo=: 00:10:55.451 10:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:55.451 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:55.451 10:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:10:55.451 10:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.451 10:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:55.451 10:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.451 10:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:55.451 10:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:55.451 10:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:55.451 10:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:56.019 10:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:10:56.019 10:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:56.019 10:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:56.019 10:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:10:56.019 10:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:56.019 10:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:56.019 10:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:56.019 10:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.019 10:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:56.019 10:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.019 10:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:56.019 10:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:56.019 10:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:56.586 00:10:56.586 10:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:56.586 10:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:56.586 10:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:56.845 10:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:56.845 10:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:56.845 10:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.845 10:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:56.845 10:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:10:56.845 10:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:56.845 { 00:10:56.845 "cntlid": 41, 00:10:56.845 "qid": 0, 00:10:56.845 "state": "enabled", 00:10:56.845 "thread": "nvmf_tgt_poll_group_000", 00:10:56.845 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca", 00:10:56.845 "listen_address": { 00:10:56.845 "trtype": "TCP", 00:10:56.845 "adrfam": "IPv4", 00:10:56.845 "traddr": "10.0.0.3", 00:10:56.845 "trsvcid": "4420" 00:10:56.845 }, 00:10:56.845 "peer_address": { 00:10:56.845 "trtype": "TCP", 00:10:56.845 "adrfam": "IPv4", 00:10:56.845 "traddr": "10.0.0.1", 00:10:56.845 "trsvcid": "40872" 00:10:56.845 }, 00:10:56.845 "auth": { 00:10:56.845 "state": "completed", 00:10:56.845 "digest": "sha256", 00:10:56.845 "dhgroup": "ffdhe8192" 00:10:56.845 } 00:10:56.845 } 00:10:56.845 ]' 00:10:56.845 10:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:56.845 10:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:56.845 10:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:56.845 10:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:56.845 10:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:57.103 10:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:57.103 10:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:57.103 10:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:57.362 10:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODQ0MDhmMjIwZThhYWFmYzNmNTA4NjJiOWExMzI1MGU2ZTZhZTUzZjUxYzI4MTA4LDyCxg==: --dhchap-ctrl-secret DHHC-1:03:NGQ1MmU3NmY0YWYxMGY5OGNmMDI5ODUyMTA1NzBjYzMyZTk3Y2I2NWUxMmJlMjUzZjgzNmIxYTUyMDg1OWQ0YarPIsE=: 00:10:57.362 10:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --hostid 02f14d39-9b07-4abc-bc4a-e88d43a336ca -l 0 --dhchap-secret DHHC-1:00:ODQ0MDhmMjIwZThhYWFmYzNmNTA4NjJiOWExMzI1MGU2ZTZhZTUzZjUxYzI4MTA4LDyCxg==: --dhchap-ctrl-secret DHHC-1:03:NGQ1MmU3NmY0YWYxMGY5OGNmMDI5ODUyMTA1NzBjYzMyZTk3Y2I2NWUxMmJlMjUzZjgzNmIxYTUyMDg1OWQ0YarPIsE=: 00:10:57.929 10:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:57.929 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:57.929 10:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:10:57.929 10:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.929 10:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:57.929 10:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
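The pass that just completed (sha256 / ffdhe8192 / key0) also exercises the kernel initiator: nvme-cli connects in-band with the DH-HMAC-CHAP secrets in DHHC-1 blob form and disconnects again. A minimal sketch of that step, reusing the address, NQN and hostid from the trace above; <secret> and <ctrl-secret> are placeholders for the DHHC-1:00:/DHHC-1:03: blobs printed in the log, not literal values:

# in-band authenticated connect from the kernel host, as driven by nvme_connect in auth.sh
nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca \
    --hostid 02f14d39-9b07-4abc-bc4a-e88d43a336ca -l 0 \
    --dhchap-secret "<secret>" --dhchap-ctrl-secret "<ctrl-secret>"
nvme disconnect -n nqn.2024-03.io.spdk:cnode0   # trace reports: disconnected 1 controller(s)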
00:10:57.929 10:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:57.929 10:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:57.929 10:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:58.188 10:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:10:58.188 10:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:58.188 10:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:58.188 10:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:10:58.188 10:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:58.188 10:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:58.188 10:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:58.188 10:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.188 10:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:58.188 10:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.188 10:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:58.188 10:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:58.188 10:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:58.756 00:10:59.015 10:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:59.015 10:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:59.015 10:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:59.274 10:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:59.274 10:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:59.274 10:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.274 10:54:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:59.274 10:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.274 10:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:59.274 { 00:10:59.274 "cntlid": 43, 00:10:59.274 "qid": 0, 00:10:59.274 "state": "enabled", 00:10:59.274 "thread": "nvmf_tgt_poll_group_000", 00:10:59.274 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca", 00:10:59.274 "listen_address": { 00:10:59.274 "trtype": "TCP", 00:10:59.274 "adrfam": "IPv4", 00:10:59.274 "traddr": "10.0.0.3", 00:10:59.274 "trsvcid": "4420" 00:10:59.274 }, 00:10:59.274 "peer_address": { 00:10:59.274 "trtype": "TCP", 00:10:59.274 "adrfam": "IPv4", 00:10:59.274 "traddr": "10.0.0.1", 00:10:59.274 "trsvcid": "40914" 00:10:59.274 }, 00:10:59.274 "auth": { 00:10:59.274 "state": "completed", 00:10:59.274 "digest": "sha256", 00:10:59.274 "dhgroup": "ffdhe8192" 00:10:59.274 } 00:10:59.274 } 00:10:59.274 ]' 00:10:59.274 10:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:59.274 10:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:59.274 10:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:59.274 10:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:59.274 10:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:59.274 10:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:59.274 10:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:59.274 10:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:59.532 10:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjM3YTMxZDFiZDhiODNkZjY3NTMwYjM4Mzg0YzQ5NWQJ3Ioy: --dhchap-ctrl-secret DHHC-1:02:NDVhYTVhZjlmY2Y0ZDdlNzY3NmM0M2NhYzhjMWYwMWIyYjNkMDkwNjQxYWJhZTFjDYJZFw==: 00:10:59.533 10:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --hostid 02f14d39-9b07-4abc-bc4a-e88d43a336ca -l 0 --dhchap-secret DHHC-1:01:NjM3YTMxZDFiZDhiODNkZjY3NTMwYjM4Mzg0YzQ5NWQJ3Ioy: --dhchap-ctrl-secret DHHC-1:02:NDVhYTVhZjlmY2Y0ZDdlNzY3NmM0M2NhYzhjMWYwMWIyYjNkMDkwNjQxYWJhZTFjDYJZFw==: 00:11:00.100 10:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:00.100 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:00.101 10:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:11:00.101 10:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.101 10:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
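Each keyid pass begins by re-provisioning both sides, as the key2 iteration right below does again. A minimal sketch of those two calls, assuming the host RPC socket /var/tmp/host.sock from the trace, that key2/ckey2 are keyfile names registered earlier in the run, and that rpc_cmd is the harness wrapper for the target-side rpc.py (its socket is configured elsewhere in the job):

# host side: limit the initiator to a single digest/dhgroup combination for this pass
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
    bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
# target side: allow the host NQN and pin its DH-HMAC-CHAP key pair
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2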
00:11:00.101 10:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.101 10:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:00.101 10:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:00.101 10:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:00.669 10:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:11:00.669 10:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:00.669 10:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:00.669 10:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:00.669 10:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:00.669 10:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:00.669 10:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:00.669 10:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.669 10:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:00.669 10:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.669 10:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:00.669 10:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:00.669 10:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:01.237 00:11:01.237 10:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:01.237 10:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:01.237 10:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:01.496 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:01.496 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:01.496 10:54:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.496 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:01.496 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.496 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:01.496 { 00:11:01.496 "cntlid": 45, 00:11:01.496 "qid": 0, 00:11:01.496 "state": "enabled", 00:11:01.496 "thread": "nvmf_tgt_poll_group_000", 00:11:01.496 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca", 00:11:01.496 "listen_address": { 00:11:01.496 "trtype": "TCP", 00:11:01.496 "adrfam": "IPv4", 00:11:01.496 "traddr": "10.0.0.3", 00:11:01.496 "trsvcid": "4420" 00:11:01.496 }, 00:11:01.496 "peer_address": { 00:11:01.496 "trtype": "TCP", 00:11:01.496 "adrfam": "IPv4", 00:11:01.496 "traddr": "10.0.0.1", 00:11:01.496 "trsvcid": "40938" 00:11:01.496 }, 00:11:01.496 "auth": { 00:11:01.496 "state": "completed", 00:11:01.496 "digest": "sha256", 00:11:01.496 "dhgroup": "ffdhe8192" 00:11:01.496 } 00:11:01.496 } 00:11:01.496 ]' 00:11:01.496 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:01.496 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:01.496 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:01.496 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:01.496 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:01.496 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:01.496 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:01.496 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:01.755 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWY2M2U4YzhiNWEyNGYyOGI2MWNiZWE2M2FiZjA5ZGUwMTcxYzlhNWY5MTBjM2NkviDV6Q==: --dhchap-ctrl-secret DHHC-1:01:ZDI4YjU0MDBhZDY2ZDYyM2JiOWVlNDdhOTE2YTFhZmKfhOZS: 00:11:01.755 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --hostid 02f14d39-9b07-4abc-bc4a-e88d43a336ca -l 0 --dhchap-secret DHHC-1:02:ZWY2M2U4YzhiNWEyNGYyOGI2MWNiZWE2M2FiZjA5ZGUwMTcxYzlhNWY5MTBjM2NkviDV6Q==: --dhchap-ctrl-secret DHHC-1:01:ZDI4YjU0MDBhZDY2ZDYyM2JiOWVlNDdhOTE2YTFhZmKfhOZS: 00:11:02.355 10:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:02.355 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:02.355 10:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:11:02.355 10:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
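After every authenticated attach, the test reads the subsystem's qpairs back from the target and checks that the negotiated parameters match what was requested, which is what the repeated jq probes above are doing. A minimal sketch of that check for the ffdhe8192 passes, again assuming rpc_cmd is the harness's target-side rpc.py wrapper:

# fetch the qpair list for the subsystem and verify the negotiated auth parameters
qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
jq -r '.[0].auth.digest'  <<< "$qpairs"   # expect: sha256
jq -r '.[0].auth.dhgroup' <<< "$qpairs"   # expect: ffdhe8192
jq -r '.[0].auth.state'   <<< "$qpairs"   # expect: completed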
00:11:02.355 10:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:02.355 10:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.355 10:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:02.355 10:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:02.355 10:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:02.620 10:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:11:02.620 10:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:02.620 10:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:02.620 10:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:02.620 10:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:02.620 10:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:02.620 10:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --dhchap-key key3 00:11:02.620 10:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.620 10:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:02.620 10:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.620 10:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:02.620 10:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:02.620 10:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:03.187 00:11:03.187 10:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:03.187 10:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:03.187 10:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:03.446 10:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:03.446 10:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:03.446 
10:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.446 10:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:03.704 10:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.705 10:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:03.705 { 00:11:03.705 "cntlid": 47, 00:11:03.705 "qid": 0, 00:11:03.705 "state": "enabled", 00:11:03.705 "thread": "nvmf_tgt_poll_group_000", 00:11:03.705 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca", 00:11:03.705 "listen_address": { 00:11:03.705 "trtype": "TCP", 00:11:03.705 "adrfam": "IPv4", 00:11:03.705 "traddr": "10.0.0.3", 00:11:03.705 "trsvcid": "4420" 00:11:03.705 }, 00:11:03.705 "peer_address": { 00:11:03.705 "trtype": "TCP", 00:11:03.705 "adrfam": "IPv4", 00:11:03.705 "traddr": "10.0.0.1", 00:11:03.705 "trsvcid": "42388" 00:11:03.705 }, 00:11:03.705 "auth": { 00:11:03.705 "state": "completed", 00:11:03.705 "digest": "sha256", 00:11:03.705 "dhgroup": "ffdhe8192" 00:11:03.705 } 00:11:03.705 } 00:11:03.705 ]' 00:11:03.705 10:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:03.705 10:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:03.705 10:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:03.705 10:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:03.705 10:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:03.705 10:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:03.705 10:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:03.705 10:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:03.963 10:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzQ5ODMxYmRjYzRmOTMwOTM5MDZiYTJkOWE5ODBmOWI4NjY3ZmFhNjc5MTliNWJmNzg0ZjQ4ZWM4OWJmN2Y5MZ5XXyo=: 00:11:03.963 10:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --hostid 02f14d39-9b07-4abc-bc4a-e88d43a336ca -l 0 --dhchap-secret DHHC-1:03:NzQ5ODMxYmRjYzRmOTMwOTM5MDZiYTJkOWE5ODBmOWI4NjY3ZmFhNjc5MTliNWJmNzg0ZjQ4ZWM4OWJmN2Y5MZ5XXyo=: 00:11:04.527 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:04.527 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:04.527 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:11:04.527 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.527 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
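At this point the sha256 passes are done and the trace rolls over to the next digest (sha384) with the null dhgroup. Judging from the auth.sh line markers visible in the trace (@118 through @123), the driving loop has roughly the following shape; the full contents of the digests/dhgroups/keys arrays are an assumption beyond the values seen so far:

# inferred shape of the loop in target/auth.sh that produces these passes
for digest in "${digests[@]}"; do        # sha256, sha384, ... (as seen in this run)
  for dhgroup in "${dhgroups[@]}"; do    # null, ..., ffdhe6144, ffdhe8192
    for keyid in "${!keys[@]}"; do       # 0 1 2 3
      hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
      connect_authenticate "$digest" "$dhgroup" "$keyid"
    done
  done
done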
00:11:04.785 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.785 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:11:04.785 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:04.785 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:04.785 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:04.785 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:05.044 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:11:05.044 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:05.044 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:05.044 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:05.044 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:05.044 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:05.044 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:05.044 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.044 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:05.044 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.044 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:05.044 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:05.044 10:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:05.301 00:11:05.301 10:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:05.301 10:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:05.302 10:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:05.560 10:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:05.560 10:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:05.560 10:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.560 10:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:05.560 10:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.560 10:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:05.560 { 00:11:05.560 "cntlid": 49, 00:11:05.560 "qid": 0, 00:11:05.560 "state": "enabled", 00:11:05.560 "thread": "nvmf_tgt_poll_group_000", 00:11:05.560 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca", 00:11:05.560 "listen_address": { 00:11:05.560 "trtype": "TCP", 00:11:05.560 "adrfam": "IPv4", 00:11:05.560 "traddr": "10.0.0.3", 00:11:05.560 "trsvcid": "4420" 00:11:05.560 }, 00:11:05.560 "peer_address": { 00:11:05.560 "trtype": "TCP", 00:11:05.560 "adrfam": "IPv4", 00:11:05.560 "traddr": "10.0.0.1", 00:11:05.560 "trsvcid": "42416" 00:11:05.560 }, 00:11:05.560 "auth": { 00:11:05.560 "state": "completed", 00:11:05.560 "digest": "sha384", 00:11:05.560 "dhgroup": "null" 00:11:05.560 } 00:11:05.560 } 00:11:05.560 ]' 00:11:05.560 10:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:05.560 10:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:05.560 10:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:05.819 10:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:05.819 10:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:05.819 10:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:05.819 10:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:05.819 10:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:06.078 10:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODQ0MDhmMjIwZThhYWFmYzNmNTA4NjJiOWExMzI1MGU2ZTZhZTUzZjUxYzI4MTA4LDyCxg==: --dhchap-ctrl-secret DHHC-1:03:NGQ1MmU3NmY0YWYxMGY5OGNmMDI5ODUyMTA1NzBjYzMyZTk3Y2I2NWUxMmJlMjUzZjgzNmIxYTUyMDg1OWQ0YarPIsE=: 00:11:06.078 10:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --hostid 02f14d39-9b07-4abc-bc4a-e88d43a336ca -l 0 --dhchap-secret DHHC-1:00:ODQ0MDhmMjIwZThhYWFmYzNmNTA4NjJiOWExMzI1MGU2ZTZhZTUzZjUxYzI4MTA4LDyCxg==: --dhchap-ctrl-secret DHHC-1:03:NGQ1MmU3NmY0YWYxMGY5OGNmMDI5ODUyMTA1NzBjYzMyZTk3Y2I2NWUxMmJlMjUzZjgzNmIxYTUyMDg1OWQ0YarPIsE=: 00:11:06.647 10:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:06.647 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:06.647 10:54:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:11:06.647 10:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.647 10:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:06.647 10:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.647 10:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:06.647 10:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:06.647 10:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:06.906 10:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:11:06.906 10:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:06.906 10:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:06.906 10:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:06.906 10:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:06.906 10:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:06.906 10:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:06.906 10:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.906 10:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:06.906 10:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.906 10:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:06.906 10:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:06.906 10:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:07.165 00:11:07.165 10:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:07.165 10:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
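The sha384 / null / key1 pass above attaches the host-side controller with the same RPC as in the earlier groups; a minimal sketch of that call and the controller-name check the next entries perform, assuming the socket and NQNs captured in this run:

# authenticated attach from the SPDK host process, then confirm the controller exists
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
    bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca \
    -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
    bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0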
00:11:07.165 10:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:07.424 10:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:07.424 10:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:07.424 10:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.424 10:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:07.683 10:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.683 10:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:07.683 { 00:11:07.683 "cntlid": 51, 00:11:07.683 "qid": 0, 00:11:07.683 "state": "enabled", 00:11:07.683 "thread": "nvmf_tgt_poll_group_000", 00:11:07.683 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca", 00:11:07.683 "listen_address": { 00:11:07.683 "trtype": "TCP", 00:11:07.683 "adrfam": "IPv4", 00:11:07.683 "traddr": "10.0.0.3", 00:11:07.683 "trsvcid": "4420" 00:11:07.683 }, 00:11:07.683 "peer_address": { 00:11:07.683 "trtype": "TCP", 00:11:07.683 "adrfam": "IPv4", 00:11:07.683 "traddr": "10.0.0.1", 00:11:07.683 "trsvcid": "42454" 00:11:07.683 }, 00:11:07.683 "auth": { 00:11:07.683 "state": "completed", 00:11:07.683 "digest": "sha384", 00:11:07.683 "dhgroup": "null" 00:11:07.683 } 00:11:07.683 } 00:11:07.683 ]' 00:11:07.683 10:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:07.683 10:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:07.683 10:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:07.683 10:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:07.683 10:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:07.683 10:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:07.683 10:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:07.683 10:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:07.942 10:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjM3YTMxZDFiZDhiODNkZjY3NTMwYjM4Mzg0YzQ5NWQJ3Ioy: --dhchap-ctrl-secret DHHC-1:02:NDVhYTVhZjlmY2Y0ZDdlNzY3NmM0M2NhYzhjMWYwMWIyYjNkMDkwNjQxYWJhZTFjDYJZFw==: 00:11:07.942 10:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --hostid 02f14d39-9b07-4abc-bc4a-e88d43a336ca -l 0 --dhchap-secret DHHC-1:01:NjM3YTMxZDFiZDhiODNkZjY3NTMwYjM4Mzg0YzQ5NWQJ3Ioy: --dhchap-ctrl-secret DHHC-1:02:NDVhYTVhZjlmY2Y0ZDdlNzY3NmM0M2NhYzhjMWYwMWIyYjNkMDkwNjQxYWJhZTFjDYJZFw==: 00:11:08.509 10:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:08.509 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:08.509 10:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:11:08.509 10:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.509 10:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:08.509 10:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.509 10:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:08.509 10:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:08.509 10:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:08.768 10:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:11:08.768 10:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:08.768 10:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:08.768 10:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:08.768 10:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:08.768 10:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:08.768 10:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:08.768 10:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.768 10:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:08.768 10:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.768 10:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:08.768 10:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:08.768 10:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:09.335 00:11:09.336 10:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:09.336 10:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:11:09.336 10:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:09.595 10:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:09.595 10:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:09.595 10:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.595 10:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:09.595 10:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.595 10:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:09.595 { 00:11:09.595 "cntlid": 53, 00:11:09.595 "qid": 0, 00:11:09.595 "state": "enabled", 00:11:09.595 "thread": "nvmf_tgt_poll_group_000", 00:11:09.595 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca", 00:11:09.595 "listen_address": { 00:11:09.595 "trtype": "TCP", 00:11:09.595 "adrfam": "IPv4", 00:11:09.595 "traddr": "10.0.0.3", 00:11:09.595 "trsvcid": "4420" 00:11:09.595 }, 00:11:09.595 "peer_address": { 00:11:09.595 "trtype": "TCP", 00:11:09.595 "adrfam": "IPv4", 00:11:09.595 "traddr": "10.0.0.1", 00:11:09.595 "trsvcid": "42490" 00:11:09.595 }, 00:11:09.595 "auth": { 00:11:09.595 "state": "completed", 00:11:09.595 "digest": "sha384", 00:11:09.595 "dhgroup": "null" 00:11:09.595 } 00:11:09.595 } 00:11:09.595 ]' 00:11:09.595 10:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:09.595 10:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:09.595 10:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:09.595 10:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:09.595 10:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:09.595 10:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:09.595 10:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:09.595 10:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:09.854 10:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWY2M2U4YzhiNWEyNGYyOGI2MWNiZWE2M2FiZjA5ZGUwMTcxYzlhNWY5MTBjM2NkviDV6Q==: --dhchap-ctrl-secret DHHC-1:01:ZDI4YjU0MDBhZDY2ZDYyM2JiOWVlNDdhOTE2YTFhZmKfhOZS: 00:11:09.854 10:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --hostid 02f14d39-9b07-4abc-bc4a-e88d43a336ca -l 0 --dhchap-secret DHHC-1:02:ZWY2M2U4YzhiNWEyNGYyOGI2MWNiZWE2M2FiZjA5ZGUwMTcxYzlhNWY5MTBjM2NkviDV6Q==: --dhchap-ctrl-secret DHHC-1:01:ZDI4YjU0MDBhZDY2ZDYyM2JiOWVlNDdhOTE2YTFhZmKfhOZS: 00:11:10.789 10:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:10.789 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:10.789 10:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:11:10.789 10:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.789 10:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.789 10:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.789 10:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:10.789 10:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:10.789 10:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:10.789 10:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:11:10.789 10:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:10.790 10:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:10.790 10:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:10.790 10:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:10.790 10:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:10.790 10:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --dhchap-key key3 00:11:10.790 10:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.790 10:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.790 10:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.790 10:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:10.790 10:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:10.790 10:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:11.048 00:11:11.307 10:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:11.307 10:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 
00:11:11.307 10:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:11.307 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:11.307 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:11.307 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.307 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:11.307 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.307 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:11.307 { 00:11:11.307 "cntlid": 55, 00:11:11.307 "qid": 0, 00:11:11.307 "state": "enabled", 00:11:11.307 "thread": "nvmf_tgt_poll_group_000", 00:11:11.307 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca", 00:11:11.307 "listen_address": { 00:11:11.307 "trtype": "TCP", 00:11:11.307 "adrfam": "IPv4", 00:11:11.307 "traddr": "10.0.0.3", 00:11:11.307 "trsvcid": "4420" 00:11:11.307 }, 00:11:11.307 "peer_address": { 00:11:11.307 "trtype": "TCP", 00:11:11.307 "adrfam": "IPv4", 00:11:11.307 "traddr": "10.0.0.1", 00:11:11.307 "trsvcid": "42512" 00:11:11.307 }, 00:11:11.307 "auth": { 00:11:11.307 "state": "completed", 00:11:11.307 "digest": "sha384", 00:11:11.307 "dhgroup": "null" 00:11:11.307 } 00:11:11.307 } 00:11:11.307 ]' 00:11:11.307 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:11.565 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:11.565 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:11.565 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:11.565 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:11.565 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:11.565 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:11.565 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:11.823 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzQ5ODMxYmRjYzRmOTMwOTM5MDZiYTJkOWE5ODBmOWI4NjY3ZmFhNjc5MTliNWJmNzg0ZjQ4ZWM4OWJmN2Y5MZ5XXyo=: 00:11:11.823 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --hostid 02f14d39-9b07-4abc-bc4a-e88d43a336ca -l 0 --dhchap-secret DHHC-1:03:NzQ5ODMxYmRjYzRmOTMwOTM5MDZiYTJkOWE5ODBmOWI4NjY3ZmFhNjc5MTliNWJmNzg0ZjQ4ZWM4OWJmN2Y5MZ5XXyo=: 00:11:12.477 10:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:12.477 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
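The trace above completes one pass of the auth loop (sha384 digest, "null" DH group, key3): set the host-side DH-HMAC-CHAP options, allow the host on the subsystem with the key under test, attach a controller through the host RPC server, confirm the qpair reports auth state "completed", then tear down and repeat with the kernel initiator. A minimal sketch of that sequence follows, using only the RPCs and CLI calls visible in this trace; the addresses, NQNs and host UUID are copied from the log, the key names (key3/ckey3) are assumed to have been registered earlier in the test outside this excerpt, and the DHHC-1 secrets are elided.

    # sketch of one connect_authenticate iteration, assumptions as noted above
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    host_sock=/var/tmp/host.sock
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca

    # restrict the host-side initiator to the digest/dhgroup under test
    $rpc -s $host_sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null

    # allow the host on the subsystem with the key being tested
    $rpc nvmf_subsystem_add_host $subnqn $hostnqn --dhchap-key key3

    # attach via the host RPC server, then verify the qpair's auth block
    $rpc -s $host_sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
        -q $hostnqn -n $subnqn -b nvme0 --dhchap-key key3
    $rpc nvmf_subsystem_get_qpairs $subnqn | jq -r '.[0].auth'

    # tear down, then repeat with the kernel initiator (secret elided)
    $rpc -s $host_sock bdev_nvme_detach_controller nvme0
    # nvme connect -t tcp -a 10.0.0.3 -n $subnqn -q $hostnqn --hostid ... -l 0 --dhchap-secret DHHC-1:...
    # nvme disconnect -n $subnqn
    $rpc nvmf_subsystem_remove_host $subnqn $hostnqn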
00:11:12.477 10:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:11:12.477 10:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.477 10:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:12.477 10:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.477 10:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:12.477 10:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:12.477 10:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:12.477 10:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:12.735 10:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:11:12.735 10:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:12.735 10:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:12.735 10:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:12.735 10:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:12.735 10:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:12.735 10:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:12.735 10:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.735 10:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:12.735 10:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.735 10:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:12.735 10:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:12.735 10:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:12.992 00:11:12.992 10:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:12.992 
10:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:12.992 10:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:13.249 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:13.249 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:13.249 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.249 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:13.249 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.249 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:13.249 { 00:11:13.249 "cntlid": 57, 00:11:13.249 "qid": 0, 00:11:13.249 "state": "enabled", 00:11:13.249 "thread": "nvmf_tgt_poll_group_000", 00:11:13.249 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca", 00:11:13.249 "listen_address": { 00:11:13.249 "trtype": "TCP", 00:11:13.249 "adrfam": "IPv4", 00:11:13.249 "traddr": "10.0.0.3", 00:11:13.249 "trsvcid": "4420" 00:11:13.249 }, 00:11:13.249 "peer_address": { 00:11:13.249 "trtype": "TCP", 00:11:13.249 "adrfam": "IPv4", 00:11:13.249 "traddr": "10.0.0.1", 00:11:13.249 "trsvcid": "55860" 00:11:13.249 }, 00:11:13.249 "auth": { 00:11:13.249 "state": "completed", 00:11:13.249 "digest": "sha384", 00:11:13.249 "dhgroup": "ffdhe2048" 00:11:13.249 } 00:11:13.249 } 00:11:13.249 ]' 00:11:13.249 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:13.506 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:13.506 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:13.506 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:13.506 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:13.506 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:13.506 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:13.507 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:13.764 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODQ0MDhmMjIwZThhYWFmYzNmNTA4NjJiOWExMzI1MGU2ZTZhZTUzZjUxYzI4MTA4LDyCxg==: --dhchap-ctrl-secret DHHC-1:03:NGQ1MmU3NmY0YWYxMGY5OGNmMDI5ODUyMTA1NzBjYzMyZTk3Y2I2NWUxMmJlMjUzZjgzNmIxYTUyMDg1OWQ0YarPIsE=: 00:11:13.765 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --hostid 02f14d39-9b07-4abc-bc4a-e88d43a336ca -l 0 --dhchap-secret DHHC-1:00:ODQ0MDhmMjIwZThhYWFmYzNmNTA4NjJiOWExMzI1MGU2ZTZhZTUzZjUxYzI4MTA4LDyCxg==: 
--dhchap-ctrl-secret DHHC-1:03:NGQ1MmU3NmY0YWYxMGY5OGNmMDI5ODUyMTA1NzBjYzMyZTk3Y2I2NWUxMmJlMjUzZjgzNmIxYTUyMDg1OWQ0YarPIsE=: 00:11:14.332 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:14.332 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:14.332 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:11:14.332 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.332 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:14.332 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.332 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:14.332 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:14.332 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:14.591 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:11:14.591 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:14.591 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:14.591 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:14.591 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:14.591 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:14.591 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:14.591 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.591 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:14.591 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.591 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:14.591 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:14.591 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:14.850 00:11:14.850 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:14.850 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:14.850 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:15.109 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:15.109 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:15.109 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.109 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:15.109 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.109 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:15.109 { 00:11:15.109 "cntlid": 59, 00:11:15.109 "qid": 0, 00:11:15.109 "state": "enabled", 00:11:15.109 "thread": "nvmf_tgt_poll_group_000", 00:11:15.109 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca", 00:11:15.109 "listen_address": { 00:11:15.109 "trtype": "TCP", 00:11:15.109 "adrfam": "IPv4", 00:11:15.109 "traddr": "10.0.0.3", 00:11:15.109 "trsvcid": "4420" 00:11:15.109 }, 00:11:15.109 "peer_address": { 00:11:15.109 "trtype": "TCP", 00:11:15.109 "adrfam": "IPv4", 00:11:15.109 "traddr": "10.0.0.1", 00:11:15.109 "trsvcid": "55884" 00:11:15.109 }, 00:11:15.109 "auth": { 00:11:15.109 "state": "completed", 00:11:15.109 "digest": "sha384", 00:11:15.109 "dhgroup": "ffdhe2048" 00:11:15.109 } 00:11:15.109 } 00:11:15.109 ]' 00:11:15.109 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:15.368 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:15.368 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:15.368 10:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:15.368 10:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:15.368 10:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:15.368 10:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:15.368 10:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:15.627 10:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjM3YTMxZDFiZDhiODNkZjY3NTMwYjM4Mzg0YzQ5NWQJ3Ioy: --dhchap-ctrl-secret DHHC-1:02:NDVhYTVhZjlmY2Y0ZDdlNzY3NmM0M2NhYzhjMWYwMWIyYjNkMDkwNjQxYWJhZTFjDYJZFw==: 00:11:15.627 10:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --hostid 02f14d39-9b07-4abc-bc4a-e88d43a336ca -l 0 --dhchap-secret DHHC-1:01:NjM3YTMxZDFiZDhiODNkZjY3NTMwYjM4Mzg0YzQ5NWQJ3Ioy: --dhchap-ctrl-secret DHHC-1:02:NDVhYTVhZjlmY2Y0ZDdlNzY3NmM0M2NhYzhjMWYwMWIyYjNkMDkwNjQxYWJhZTFjDYJZFw==: 00:11:16.194 10:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:16.194 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:16.194 10:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:11:16.194 10:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.194 10:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:16.195 10:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.195 10:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:16.195 10:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:16.195 10:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:16.453 10:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:11:16.453 10:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:16.453 10:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:16.453 10:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:16.453 10:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:16.453 10:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:16.453 10:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:16.453 10:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.453 10:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:16.453 10:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.453 10:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:16.453 10:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:16.453 10:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:16.712 00:11:16.981 10:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:16.981 10:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:16.981 10:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:17.261 10:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:17.261 10:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:17.261 10:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.261 10:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:17.261 10:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.261 10:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:17.261 { 00:11:17.261 "cntlid": 61, 00:11:17.261 "qid": 0, 00:11:17.261 "state": "enabled", 00:11:17.261 "thread": "nvmf_tgt_poll_group_000", 00:11:17.261 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca", 00:11:17.261 "listen_address": { 00:11:17.261 "trtype": "TCP", 00:11:17.261 "adrfam": "IPv4", 00:11:17.261 "traddr": "10.0.0.3", 00:11:17.261 "trsvcid": "4420" 00:11:17.261 }, 00:11:17.261 "peer_address": { 00:11:17.261 "trtype": "TCP", 00:11:17.261 "adrfam": "IPv4", 00:11:17.261 "traddr": "10.0.0.1", 00:11:17.261 "trsvcid": "55922" 00:11:17.261 }, 00:11:17.261 "auth": { 00:11:17.261 "state": "completed", 00:11:17.261 "digest": "sha384", 00:11:17.261 "dhgroup": "ffdhe2048" 00:11:17.261 } 00:11:17.261 } 00:11:17.261 ]' 00:11:17.261 10:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:17.261 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:17.261 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:17.261 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:17.261 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:17.520 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:17.520 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:17.520 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:17.520 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWY2M2U4YzhiNWEyNGYyOGI2MWNiZWE2M2FiZjA5ZGUwMTcxYzlhNWY5MTBjM2NkviDV6Q==: --dhchap-ctrl-secret DHHC-1:01:ZDI4YjU0MDBhZDY2ZDYyM2JiOWVlNDdhOTE2YTFhZmKfhOZS: 00:11:17.520 10:55:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --hostid 02f14d39-9b07-4abc-bc4a-e88d43a336ca -l 0 --dhchap-secret DHHC-1:02:ZWY2M2U4YzhiNWEyNGYyOGI2MWNiZWE2M2FiZjA5ZGUwMTcxYzlhNWY5MTBjM2NkviDV6Q==: --dhchap-ctrl-secret DHHC-1:01:ZDI4YjU0MDBhZDY2ZDYyM2JiOWVlNDdhOTE2YTFhZmKfhOZS: 00:11:18.087 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:18.087 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:18.087 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:11:18.087 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.087 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:18.087 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.087 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:18.087 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:18.087 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:18.655 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:11:18.655 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:18.655 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:18.655 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:18.655 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:18.655 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:18.655 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --dhchap-key key3 00:11:18.655 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.655 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:18.655 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.655 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:18.655 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:18.655 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:18.914 00:11:18.914 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:18.914 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:18.914 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:18.914 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:18.914 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:18.914 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.914 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:19.173 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.173 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:19.173 { 00:11:19.173 "cntlid": 63, 00:11:19.173 "qid": 0, 00:11:19.173 "state": "enabled", 00:11:19.173 "thread": "nvmf_tgt_poll_group_000", 00:11:19.173 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca", 00:11:19.173 "listen_address": { 00:11:19.173 "trtype": "TCP", 00:11:19.173 "adrfam": "IPv4", 00:11:19.173 "traddr": "10.0.0.3", 00:11:19.173 "trsvcid": "4420" 00:11:19.173 }, 00:11:19.173 "peer_address": { 00:11:19.173 "trtype": "TCP", 00:11:19.173 "adrfam": "IPv4", 00:11:19.173 "traddr": "10.0.0.1", 00:11:19.173 "trsvcid": "55938" 00:11:19.173 }, 00:11:19.173 "auth": { 00:11:19.173 "state": "completed", 00:11:19.173 "digest": "sha384", 00:11:19.173 "dhgroup": "ffdhe2048" 00:11:19.173 } 00:11:19.173 } 00:11:19.173 ]' 00:11:19.173 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:19.173 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:19.173 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:19.173 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:19.173 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:19.173 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:19.173 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:19.173 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:19.432 10:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzQ5ODMxYmRjYzRmOTMwOTM5MDZiYTJkOWE5ODBmOWI4NjY3ZmFhNjc5MTliNWJmNzg0ZjQ4ZWM4OWJmN2Y5MZ5XXyo=: 00:11:19.432 10:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --hostid 02f14d39-9b07-4abc-bc4a-e88d43a336ca -l 0 --dhchap-secret DHHC-1:03:NzQ5ODMxYmRjYzRmOTMwOTM5MDZiYTJkOWE5ODBmOWI4NjY3ZmFhNjc5MTliNWJmNzg0ZjQ4ZWM4OWJmN2Y5MZ5XXyo=: 00:11:20.000 10:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:20.260 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:20.260 10:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:11:20.260 10:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.260 10:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:20.260 10:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.260 10:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:20.260 10:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:20.260 10:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:20.260 10:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:20.519 10:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:11:20.519 10:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:20.519 10:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:20.519 10:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:20.519 10:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:20.519 10:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:20.519 10:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:20.519 10:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.519 10:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:20.519 10:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.519 10:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:20.519 10:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:11:20.519 10:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:20.778 00:11:20.778 10:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:20.778 10:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:20.778 10:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:21.037 10:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:21.037 10:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:21.037 10:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.037 10:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:21.037 10:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.037 10:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:21.037 { 00:11:21.037 "cntlid": 65, 00:11:21.037 "qid": 0, 00:11:21.037 "state": "enabled", 00:11:21.037 "thread": "nvmf_tgt_poll_group_000", 00:11:21.037 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca", 00:11:21.037 "listen_address": { 00:11:21.037 "trtype": "TCP", 00:11:21.037 "adrfam": "IPv4", 00:11:21.037 "traddr": "10.0.0.3", 00:11:21.037 "trsvcid": "4420" 00:11:21.037 }, 00:11:21.037 "peer_address": { 00:11:21.037 "trtype": "TCP", 00:11:21.037 "adrfam": "IPv4", 00:11:21.037 "traddr": "10.0.0.1", 00:11:21.037 "trsvcid": "55968" 00:11:21.037 }, 00:11:21.037 "auth": { 00:11:21.037 "state": "completed", 00:11:21.037 "digest": "sha384", 00:11:21.037 "dhgroup": "ffdhe3072" 00:11:21.037 } 00:11:21.037 } 00:11:21.037 ]' 00:11:21.037 10:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:21.296 10:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:21.296 10:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:21.296 10:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:21.296 10:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:21.296 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:21.296 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:21.296 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:21.556 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:ODQ0MDhmMjIwZThhYWFmYzNmNTA4NjJiOWExMzI1MGU2ZTZhZTUzZjUxYzI4MTA4LDyCxg==: --dhchap-ctrl-secret DHHC-1:03:NGQ1MmU3NmY0YWYxMGY5OGNmMDI5ODUyMTA1NzBjYzMyZTk3Y2I2NWUxMmJlMjUzZjgzNmIxYTUyMDg1OWQ0YarPIsE=: 00:11:21.556 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --hostid 02f14d39-9b07-4abc-bc4a-e88d43a336ca -l 0 --dhchap-secret DHHC-1:00:ODQ0MDhmMjIwZThhYWFmYzNmNTA4NjJiOWExMzI1MGU2ZTZhZTUzZjUxYzI4MTA4LDyCxg==: --dhchap-ctrl-secret DHHC-1:03:NGQ1MmU3NmY0YWYxMGY5OGNmMDI5ODUyMTA1NzBjYzMyZTk3Y2I2NWUxMmJlMjUzZjgzNmIxYTUyMDg1OWQ0YarPIsE=: 00:11:22.124 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:22.124 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:22.124 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:11:22.124 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.124 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:22.383 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.383 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:22.383 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:22.383 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:22.643 10:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:11:22.643 10:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:22.643 10:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:22.643 10:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:22.643 10:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:22.643 10:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:22.643 10:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:22.643 10:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.643 10:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:22.643 10:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.643 10:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:22.643 10:55:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:22.643 10:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:22.902 00:11:22.902 10:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:22.902 10:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:22.902 10:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:23.162 10:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:23.162 10:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:23.162 10:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.162 10:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:23.162 10:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.162 10:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:23.162 { 00:11:23.162 "cntlid": 67, 00:11:23.162 "qid": 0, 00:11:23.162 "state": "enabled", 00:11:23.162 "thread": "nvmf_tgt_poll_group_000", 00:11:23.162 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca", 00:11:23.162 "listen_address": { 00:11:23.162 "trtype": "TCP", 00:11:23.162 "adrfam": "IPv4", 00:11:23.162 "traddr": "10.0.0.3", 00:11:23.162 "trsvcid": "4420" 00:11:23.162 }, 00:11:23.162 "peer_address": { 00:11:23.162 "trtype": "TCP", 00:11:23.162 "adrfam": "IPv4", 00:11:23.162 "traddr": "10.0.0.1", 00:11:23.162 "trsvcid": "54646" 00:11:23.162 }, 00:11:23.162 "auth": { 00:11:23.162 "state": "completed", 00:11:23.162 "digest": "sha384", 00:11:23.162 "dhgroup": "ffdhe3072" 00:11:23.162 } 00:11:23.162 } 00:11:23.162 ]' 00:11:23.162 10:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:23.162 10:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:23.162 10:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:23.420 10:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:23.420 10:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:23.420 10:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:23.420 10:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:23.420 10:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:23.679 10:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjM3YTMxZDFiZDhiODNkZjY3NTMwYjM4Mzg0YzQ5NWQJ3Ioy: --dhchap-ctrl-secret DHHC-1:02:NDVhYTVhZjlmY2Y0ZDdlNzY3NmM0M2NhYzhjMWYwMWIyYjNkMDkwNjQxYWJhZTFjDYJZFw==: 00:11:23.679 10:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --hostid 02f14d39-9b07-4abc-bc4a-e88d43a336ca -l 0 --dhchap-secret DHHC-1:01:NjM3YTMxZDFiZDhiODNkZjY3NTMwYjM4Mzg0YzQ5NWQJ3Ioy: --dhchap-ctrl-secret DHHC-1:02:NDVhYTVhZjlmY2Y0ZDdlNzY3NmM0M2NhYzhjMWYwMWIyYjNkMDkwNjQxYWJhZTFjDYJZFw==: 00:11:24.248 10:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:24.248 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:24.248 10:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:11:24.248 10:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.248 10:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:24.248 10:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.248 10:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:24.248 10:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:24.248 10:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:24.815 10:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:11:24.815 10:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:24.815 10:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:24.815 10:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:24.815 10:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:24.815 10:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:24.815 10:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:24.815 10:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.815 10:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:24.815 10:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.815 10:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:24.815 10:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:24.815 10:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:24.815 00:11:25.074 10:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:25.074 10:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:25.074 10:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:25.074 10:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:25.074 10:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:25.074 10:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.074 10:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.074 10:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.074 10:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:25.074 { 00:11:25.074 "cntlid": 69, 00:11:25.074 "qid": 0, 00:11:25.074 "state": "enabled", 00:11:25.074 "thread": "nvmf_tgt_poll_group_000", 00:11:25.074 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca", 00:11:25.074 "listen_address": { 00:11:25.074 "trtype": "TCP", 00:11:25.074 "adrfam": "IPv4", 00:11:25.074 "traddr": "10.0.0.3", 00:11:25.074 "trsvcid": "4420" 00:11:25.074 }, 00:11:25.074 "peer_address": { 00:11:25.074 "trtype": "TCP", 00:11:25.074 "adrfam": "IPv4", 00:11:25.074 "traddr": "10.0.0.1", 00:11:25.074 "trsvcid": "54672" 00:11:25.074 }, 00:11:25.074 "auth": { 00:11:25.074 "state": "completed", 00:11:25.074 "digest": "sha384", 00:11:25.074 "dhgroup": "ffdhe3072" 00:11:25.074 } 00:11:25.074 } 00:11:25.074 ]' 00:11:25.074 10:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:25.333 10:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:25.333 10:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:25.333 10:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:25.333 10:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:25.333 10:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:25.333 10:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:11:25.333 10:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:25.592 10:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWY2M2U4YzhiNWEyNGYyOGI2MWNiZWE2M2FiZjA5ZGUwMTcxYzlhNWY5MTBjM2NkviDV6Q==: --dhchap-ctrl-secret DHHC-1:01:ZDI4YjU0MDBhZDY2ZDYyM2JiOWVlNDdhOTE2YTFhZmKfhOZS: 00:11:25.592 10:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --hostid 02f14d39-9b07-4abc-bc4a-e88d43a336ca -l 0 --dhchap-secret DHHC-1:02:ZWY2M2U4YzhiNWEyNGYyOGI2MWNiZWE2M2FiZjA5ZGUwMTcxYzlhNWY5MTBjM2NkviDV6Q==: --dhchap-ctrl-secret DHHC-1:01:ZDI4YjU0MDBhZDY2ZDYyM2JiOWVlNDdhOTE2YTFhZmKfhOZS: 00:11:26.160 10:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:26.160 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:26.160 10:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:11:26.160 10:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.160 10:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:26.160 10:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.160 10:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:26.160 10:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:26.160 10:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:26.418 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:11:26.418 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:26.418 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:26.418 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:26.418 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:26.418 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:26.418 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --dhchap-key key3 00:11:26.418 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.418 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:26.418 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
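[Editor's note] The same key material is then exercised through the kernel initiator: once the bdev controller has been detached, nvme-cli reconnects to the subsystem with the DH-HMAC-CHAP secrets passed on the command line, and the connection is torn down again before the host entry is removed. A minimal sketch with placeholders standing in for the literal DHHC-1 strings recorded above (in the NVMe secret representation, the two-digit field after DHHC-1 identifies the hash, if any, used to transform the secret, and the trailing base64 carries the secret plus a CRC-32 check value):

  nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca \
      --hostid 02f14d39-9b07-4abc-bc4a-e88d43a336ca -l 0 \
      --dhchap-secret 'DHHC-1:02:<base64 host secret>:' \
      --dhchap-ctrl-secret 'DHHC-1:01:<base64 controller secret>:'
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0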
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.418 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:26.418 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:26.419 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:26.986 00:11:26.986 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:26.986 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:26.986 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:27.245 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:27.245 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:27.245 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.245 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:27.245 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.245 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:27.245 { 00:11:27.245 "cntlid": 71, 00:11:27.245 "qid": 0, 00:11:27.245 "state": "enabled", 00:11:27.245 "thread": "nvmf_tgt_poll_group_000", 00:11:27.245 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca", 00:11:27.245 "listen_address": { 00:11:27.245 "trtype": "TCP", 00:11:27.245 "adrfam": "IPv4", 00:11:27.245 "traddr": "10.0.0.3", 00:11:27.245 "trsvcid": "4420" 00:11:27.245 }, 00:11:27.245 "peer_address": { 00:11:27.245 "trtype": "TCP", 00:11:27.245 "adrfam": "IPv4", 00:11:27.245 "traddr": "10.0.0.1", 00:11:27.245 "trsvcid": "54694" 00:11:27.245 }, 00:11:27.245 "auth": { 00:11:27.245 "state": "completed", 00:11:27.245 "digest": "sha384", 00:11:27.245 "dhgroup": "ffdhe3072" 00:11:27.245 } 00:11:27.245 } 00:11:27.245 ]' 00:11:27.245 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:27.245 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:27.245 10:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:27.245 10:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:27.245 10:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:27.245 10:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:27.245 10:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
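[Editor's note] What the test actually asserts for each digest/dhgroup/key combination is visible in the JSON above: it pulls the qpair list for cnode0 from the target and checks that the negotiated digest, DH group, and authentication state match what was configured. Roughly, under the same helper names used in the trace (a paraphrase of auth.sh lines 73-77, not the literal script text):

  qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]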
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:27.245 10:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:27.812 10:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzQ5ODMxYmRjYzRmOTMwOTM5MDZiYTJkOWE5ODBmOWI4NjY3ZmFhNjc5MTliNWJmNzg0ZjQ4ZWM4OWJmN2Y5MZ5XXyo=: 00:11:27.812 10:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --hostid 02f14d39-9b07-4abc-bc4a-e88d43a336ca -l 0 --dhchap-secret DHHC-1:03:NzQ5ODMxYmRjYzRmOTMwOTM5MDZiYTJkOWE5ODBmOWI4NjY3ZmFhNjc5MTliNWJmNzg0ZjQ4ZWM4OWJmN2Y5MZ5XXyo=: 00:11:28.390 10:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:28.390 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:28.390 10:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:11:28.390 10:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.390 10:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:28.390 10:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.390 10:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:28.390 10:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:28.390 10:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:28.390 10:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:28.650 10:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:11:28.650 10:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:28.650 10:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:28.650 10:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:28.650 10:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:28.650 10:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:28.650 10:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:28.650 10:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.650 10:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:28.650 10:55:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.650 10:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:28.650 10:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:28.650 10:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:28.909 00:11:28.909 10:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:28.909 10:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:28.909 10:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:29.168 10:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:29.168 10:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:29.168 10:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.168 10:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.168 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.168 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:29.168 { 00:11:29.168 "cntlid": 73, 00:11:29.168 "qid": 0, 00:11:29.168 "state": "enabled", 00:11:29.168 "thread": "nvmf_tgt_poll_group_000", 00:11:29.168 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca", 00:11:29.168 "listen_address": { 00:11:29.168 "trtype": "TCP", 00:11:29.168 "adrfam": "IPv4", 00:11:29.168 "traddr": "10.0.0.3", 00:11:29.168 "trsvcid": "4420" 00:11:29.168 }, 00:11:29.168 "peer_address": { 00:11:29.168 "trtype": "TCP", 00:11:29.168 "adrfam": "IPv4", 00:11:29.168 "traddr": "10.0.0.1", 00:11:29.168 "trsvcid": "54720" 00:11:29.168 }, 00:11:29.168 "auth": { 00:11:29.168 "state": "completed", 00:11:29.168 "digest": "sha384", 00:11:29.168 "dhgroup": "ffdhe4096" 00:11:29.168 } 00:11:29.168 } 00:11:29.168 ]' 00:11:29.168 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:29.427 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:29.427 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:29.427 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:29.427 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:29.427 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- 
# [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:29.427 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:29.427 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:29.685 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODQ0MDhmMjIwZThhYWFmYzNmNTA4NjJiOWExMzI1MGU2ZTZhZTUzZjUxYzI4MTA4LDyCxg==: --dhchap-ctrl-secret DHHC-1:03:NGQ1MmU3NmY0YWYxMGY5OGNmMDI5ODUyMTA1NzBjYzMyZTk3Y2I2NWUxMmJlMjUzZjgzNmIxYTUyMDg1OWQ0YarPIsE=: 00:11:29.685 10:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --hostid 02f14d39-9b07-4abc-bc4a-e88d43a336ca -l 0 --dhchap-secret DHHC-1:00:ODQ0MDhmMjIwZThhYWFmYzNmNTA4NjJiOWExMzI1MGU2ZTZhZTUzZjUxYzI4MTA4LDyCxg==: --dhchap-ctrl-secret DHHC-1:03:NGQ1MmU3NmY0YWYxMGY5OGNmMDI5ODUyMTA1NzBjYzMyZTk3Y2I2NWUxMmJlMjUzZjgzNmIxYTUyMDg1OWQ0YarPIsE=: 00:11:30.328 10:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:30.328 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:30.328 10:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:11:30.328 10:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.328 10:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:30.328 10:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.328 10:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:30.328 10:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:30.328 10:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:30.587 10:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:11:30.587 10:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:30.587 10:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:30.587 10:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:30.587 10:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:30.587 10:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:30.587 10:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:30.587 10:55:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.587 10:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:30.587 10:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.587 10:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:30.587 10:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:30.587 10:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:30.846 00:11:30.846 10:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:30.846 10:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:30.846 10:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:31.105 10:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:31.105 10:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:31.105 10:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.105 10:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:31.105 10:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.105 10:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:31.105 { 00:11:31.105 "cntlid": 75, 00:11:31.105 "qid": 0, 00:11:31.105 "state": "enabled", 00:11:31.105 "thread": "nvmf_tgt_poll_group_000", 00:11:31.105 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca", 00:11:31.105 "listen_address": { 00:11:31.105 "trtype": "TCP", 00:11:31.105 "adrfam": "IPv4", 00:11:31.105 "traddr": "10.0.0.3", 00:11:31.105 "trsvcid": "4420" 00:11:31.105 }, 00:11:31.105 "peer_address": { 00:11:31.105 "trtype": "TCP", 00:11:31.105 "adrfam": "IPv4", 00:11:31.105 "traddr": "10.0.0.1", 00:11:31.105 "trsvcid": "54756" 00:11:31.105 }, 00:11:31.105 "auth": { 00:11:31.105 "state": "completed", 00:11:31.105 "digest": "sha384", 00:11:31.105 "dhgroup": "ffdhe4096" 00:11:31.105 } 00:11:31.105 } 00:11:31.105 ]' 00:11:31.105 10:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:31.364 10:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:31.364 10:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:31.364 10:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 
== \f\f\d\h\e\4\0\9\6 ]] 00:11:31.364 10:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:31.364 10:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:31.364 10:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:31.364 10:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:31.623 10:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjM3YTMxZDFiZDhiODNkZjY3NTMwYjM4Mzg0YzQ5NWQJ3Ioy: --dhchap-ctrl-secret DHHC-1:02:NDVhYTVhZjlmY2Y0ZDdlNzY3NmM0M2NhYzhjMWYwMWIyYjNkMDkwNjQxYWJhZTFjDYJZFw==: 00:11:31.623 10:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --hostid 02f14d39-9b07-4abc-bc4a-e88d43a336ca -l 0 --dhchap-secret DHHC-1:01:NjM3YTMxZDFiZDhiODNkZjY3NTMwYjM4Mzg0YzQ5NWQJ3Ioy: --dhchap-ctrl-secret DHHC-1:02:NDVhYTVhZjlmY2Y0ZDdlNzY3NmM0M2NhYzhjMWYwMWIyYjNkMDkwNjQxYWJhZTFjDYJZFw==: 00:11:32.192 10:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:32.192 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:32.192 10:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:11:32.192 10:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.192 10:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:32.192 10:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.192 10:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:32.192 10:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:32.192 10:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:32.455 10:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:11:32.455 10:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:32.455 10:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:32.455 10:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:32.455 10:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:32.455 10:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:32.455 10:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:32.455 10:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.455 10:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:32.455 10:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.455 10:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:32.455 10:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:32.455 10:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:32.713 00:11:32.713 10:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:32.713 10:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:32.713 10:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:32.973 10:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:32.973 10:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:32.973 10:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.973 10:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:32.973 10:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.973 10:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:32.973 { 00:11:32.973 "cntlid": 77, 00:11:32.973 "qid": 0, 00:11:32.973 "state": "enabled", 00:11:32.973 "thread": "nvmf_tgt_poll_group_000", 00:11:32.973 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca", 00:11:32.973 "listen_address": { 00:11:32.973 "trtype": "TCP", 00:11:32.973 "adrfam": "IPv4", 00:11:32.973 "traddr": "10.0.0.3", 00:11:32.973 "trsvcid": "4420" 00:11:32.973 }, 00:11:32.973 "peer_address": { 00:11:32.973 "trtype": "TCP", 00:11:32.973 "adrfam": "IPv4", 00:11:32.973 "traddr": "10.0.0.1", 00:11:32.973 "trsvcid": "39852" 00:11:32.973 }, 00:11:32.973 "auth": { 00:11:32.973 "state": "completed", 00:11:32.973 "digest": "sha384", 00:11:32.973 "dhgroup": "ffdhe4096" 00:11:32.973 } 00:11:32.973 } 00:11:32.973 ]' 00:11:32.973 10:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:32.973 10:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:32.973 10:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- 
# jq -r '.[0].auth.dhgroup' 00:11:33.231 10:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:33.231 10:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:33.231 10:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:33.231 10:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:33.231 10:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:33.490 10:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWY2M2U4YzhiNWEyNGYyOGI2MWNiZWE2M2FiZjA5ZGUwMTcxYzlhNWY5MTBjM2NkviDV6Q==: --dhchap-ctrl-secret DHHC-1:01:ZDI4YjU0MDBhZDY2ZDYyM2JiOWVlNDdhOTE2YTFhZmKfhOZS: 00:11:33.491 10:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --hostid 02f14d39-9b07-4abc-bc4a-e88d43a336ca -l 0 --dhchap-secret DHHC-1:02:ZWY2M2U4YzhiNWEyNGYyOGI2MWNiZWE2M2FiZjA5ZGUwMTcxYzlhNWY5MTBjM2NkviDV6Q==: --dhchap-ctrl-secret DHHC-1:01:ZDI4YjU0MDBhZDY2ZDYyM2JiOWVlNDdhOTE2YTFhZmKfhOZS: 00:11:34.056 10:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:34.056 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:34.056 10:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:11:34.056 10:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.056 10:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:34.056 10:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.056 10:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:34.056 10:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:34.056 10:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:34.314 10:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:11:34.314 10:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:34.314 10:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:34.314 10:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:34.314 10:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:34.314 10:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:34.314 10:55:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --dhchap-key key3 00:11:34.314 10:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.314 10:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:34.314 10:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.314 10:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:34.314 10:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:34.314 10:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:34.572 00:11:34.572 10:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:34.572 10:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:34.572 10:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:35.140 10:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:35.140 10:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:35.140 10:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.140 10:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.140 10:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.140 10:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:35.140 { 00:11:35.140 "cntlid": 79, 00:11:35.140 "qid": 0, 00:11:35.140 "state": "enabled", 00:11:35.140 "thread": "nvmf_tgt_poll_group_000", 00:11:35.140 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca", 00:11:35.140 "listen_address": { 00:11:35.140 "trtype": "TCP", 00:11:35.140 "adrfam": "IPv4", 00:11:35.140 "traddr": "10.0.0.3", 00:11:35.140 "trsvcid": "4420" 00:11:35.140 }, 00:11:35.140 "peer_address": { 00:11:35.140 "trtype": "TCP", 00:11:35.140 "adrfam": "IPv4", 00:11:35.140 "traddr": "10.0.0.1", 00:11:35.140 "trsvcid": "39874" 00:11:35.140 }, 00:11:35.140 "auth": { 00:11:35.140 "state": "completed", 00:11:35.140 "digest": "sha384", 00:11:35.140 "dhgroup": "ffdhe4096" 00:11:35.140 } 00:11:35.140 } 00:11:35.140 ]' 00:11:35.140 10:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:35.140 10:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:35.140 10:55:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:35.140 10:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:35.140 10:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:35.140 10:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:35.140 10:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:35.140 10:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:35.399 10:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzQ5ODMxYmRjYzRmOTMwOTM5MDZiYTJkOWE5ODBmOWI4NjY3ZmFhNjc5MTliNWJmNzg0ZjQ4ZWM4OWJmN2Y5MZ5XXyo=: 00:11:35.399 10:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --hostid 02f14d39-9b07-4abc-bc4a-e88d43a336ca -l 0 --dhchap-secret DHHC-1:03:NzQ5ODMxYmRjYzRmOTMwOTM5MDZiYTJkOWE5ODBmOWI4NjY3ZmFhNjc5MTliNWJmNzg0ZjQ4ZWM4OWJmN2Y5MZ5XXyo=: 00:11:35.965 10:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:35.965 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:35.965 10:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:11:35.966 10:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.966 10:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.966 10:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.966 10:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:35.966 10:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:35.966 10:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:35.966 10:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:36.225 10:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:11:36.225 10:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:36.225 10:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:36.225 10:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:36.225 10:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:36.225 10:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:36.225 10:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:36.225 10:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.225 10:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.225 10:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.225 10:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:36.225 10:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:36.225 10:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:36.792 00:11:36.792 10:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:36.792 10:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:36.792 10:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:37.049 10:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:37.049 10:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:37.049 10:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.049 10:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.049 10:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.049 10:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:37.049 { 00:11:37.049 "cntlid": 81, 00:11:37.049 "qid": 0, 00:11:37.049 "state": "enabled", 00:11:37.049 "thread": "nvmf_tgt_poll_group_000", 00:11:37.049 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca", 00:11:37.049 "listen_address": { 00:11:37.049 "trtype": "TCP", 00:11:37.049 "adrfam": "IPv4", 00:11:37.049 "traddr": "10.0.0.3", 00:11:37.049 "trsvcid": "4420" 00:11:37.049 }, 00:11:37.049 "peer_address": { 00:11:37.049 "trtype": "TCP", 00:11:37.049 "adrfam": "IPv4", 00:11:37.049 "traddr": "10.0.0.1", 00:11:37.049 "trsvcid": "39892" 00:11:37.049 }, 00:11:37.049 "auth": { 00:11:37.049 "state": "completed", 00:11:37.049 "digest": "sha384", 00:11:37.049 "dhgroup": "ffdhe6144" 00:11:37.049 } 00:11:37.049 } 00:11:37.049 ]' 00:11:37.049 10:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
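[Editor's note] On the SPDK-host side of each pass, the secrets are exercised through the bdev layer rather than the kernel: a controller is attached over the host RPC socket with the per-iteration key names, and its presence is confirmed before the qpair is inspected. A trimmed version of the two hostrpc calls traced above:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
      bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca \
      -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # expected to print nvme0
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
      bdev_nvme_get_controllers | jq -r '.[].name'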
00:11:37.049 10:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:37.049 10:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:37.307 10:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:37.307 10:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:37.307 10:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:37.307 10:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:37.307 10:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:37.566 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODQ0MDhmMjIwZThhYWFmYzNmNTA4NjJiOWExMzI1MGU2ZTZhZTUzZjUxYzI4MTA4LDyCxg==: --dhchap-ctrl-secret DHHC-1:03:NGQ1MmU3NmY0YWYxMGY5OGNmMDI5ODUyMTA1NzBjYzMyZTk3Y2I2NWUxMmJlMjUzZjgzNmIxYTUyMDg1OWQ0YarPIsE=: 00:11:37.566 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --hostid 02f14d39-9b07-4abc-bc4a-e88d43a336ca -l 0 --dhchap-secret DHHC-1:00:ODQ0MDhmMjIwZThhYWFmYzNmNTA4NjJiOWExMzI1MGU2ZTZhZTUzZjUxYzI4MTA4LDyCxg==: --dhchap-ctrl-secret DHHC-1:03:NGQ1MmU3NmY0YWYxMGY5OGNmMDI5ODUyMTA1NzBjYzMyZTk3Y2I2NWUxMmJlMjUzZjgzNmIxYTUyMDg1OWQ0YarPIsE=: 00:11:38.134 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:38.134 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:38.134 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:11:38.134 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.134 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.134 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.134 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:38.134 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:38.134 10:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:38.392 10:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:11:38.392 10:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:38.392 10:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:38.392 10:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe6144 00:11:38.392 10:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:38.392 10:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:38.392 10:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:38.392 10:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.392 10:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.392 10:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.392 10:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:38.392 10:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:38.392 10:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:38.961 00:11:38.961 10:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:38.961 10:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:38.961 10:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:39.219 10:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:39.219 10:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:39.219 10:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.219 10:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.219 10:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.219 10:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:39.219 { 00:11:39.219 "cntlid": 83, 00:11:39.219 "qid": 0, 00:11:39.219 "state": "enabled", 00:11:39.219 "thread": "nvmf_tgt_poll_group_000", 00:11:39.219 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca", 00:11:39.219 "listen_address": { 00:11:39.219 "trtype": "TCP", 00:11:39.219 "adrfam": "IPv4", 00:11:39.219 "traddr": "10.0.0.3", 00:11:39.219 "trsvcid": "4420" 00:11:39.219 }, 00:11:39.219 "peer_address": { 00:11:39.219 "trtype": "TCP", 00:11:39.219 "adrfam": "IPv4", 00:11:39.219 "traddr": "10.0.0.1", 00:11:39.219 "trsvcid": "39916" 00:11:39.219 }, 00:11:39.219 "auth": { 00:11:39.219 "state": "completed", 00:11:39.219 "digest": "sha384", 
00:11:39.219 "dhgroup": "ffdhe6144" 00:11:39.219 } 00:11:39.219 } 00:11:39.219 ]' 00:11:39.219 10:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:39.219 10:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:39.219 10:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:39.219 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:39.219 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:39.478 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:39.478 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:39.478 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:39.478 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjM3YTMxZDFiZDhiODNkZjY3NTMwYjM4Mzg0YzQ5NWQJ3Ioy: --dhchap-ctrl-secret DHHC-1:02:NDVhYTVhZjlmY2Y0ZDdlNzY3NmM0M2NhYzhjMWYwMWIyYjNkMDkwNjQxYWJhZTFjDYJZFw==: 00:11:39.478 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --hostid 02f14d39-9b07-4abc-bc4a-e88d43a336ca -l 0 --dhchap-secret DHHC-1:01:NjM3YTMxZDFiZDhiODNkZjY3NTMwYjM4Mzg0YzQ5NWQJ3Ioy: --dhchap-ctrl-secret DHHC-1:02:NDVhYTVhZjlmY2Y0ZDdlNzY3NmM0M2NhYzhjMWYwMWIyYjNkMDkwNjQxYWJhZTFjDYJZFw==: 00:11:40.413 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:40.413 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:40.413 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:11:40.413 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.413 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.413 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.413 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:40.413 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:40.413 10:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:40.672 10:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:11:40.672 10:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:40.672 10:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
digest=sha384 00:11:40.672 10:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:40.672 10:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:40.672 10:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:40.672 10:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:40.672 10:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.672 10:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.672 10:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.672 10:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:40.672 10:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:40.672 10:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:40.931 00:11:40.932 10:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:40.932 10:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:40.932 10:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:41.191 10:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:41.191 10:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:41.191 10:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.191 10:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.191 10:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.191 10:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:41.191 { 00:11:41.191 "cntlid": 85, 00:11:41.191 "qid": 0, 00:11:41.191 "state": "enabled", 00:11:41.191 "thread": "nvmf_tgt_poll_group_000", 00:11:41.191 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca", 00:11:41.191 "listen_address": { 00:11:41.191 "trtype": "TCP", 00:11:41.191 "adrfam": "IPv4", 00:11:41.191 "traddr": "10.0.0.3", 00:11:41.191 "trsvcid": "4420" 00:11:41.191 }, 00:11:41.191 "peer_address": { 00:11:41.191 "trtype": "TCP", 00:11:41.191 "adrfam": "IPv4", 00:11:41.191 "traddr": "10.0.0.1", 00:11:41.191 "trsvcid": "39938" 
00:11:41.191 }, 00:11:41.191 "auth": { 00:11:41.191 "state": "completed", 00:11:41.191 "digest": "sha384", 00:11:41.191 "dhgroup": "ffdhe6144" 00:11:41.191 } 00:11:41.191 } 00:11:41.191 ]' 00:11:41.191 10:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:41.191 10:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:41.191 10:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:41.449 10:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:41.450 10:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:41.450 10:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:41.450 10:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:41.450 10:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:41.708 10:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWY2M2U4YzhiNWEyNGYyOGI2MWNiZWE2M2FiZjA5ZGUwMTcxYzlhNWY5MTBjM2NkviDV6Q==: --dhchap-ctrl-secret DHHC-1:01:ZDI4YjU0MDBhZDY2ZDYyM2JiOWVlNDdhOTE2YTFhZmKfhOZS: 00:11:41.709 10:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --hostid 02f14d39-9b07-4abc-bc4a-e88d43a336ca -l 0 --dhchap-secret DHHC-1:02:ZWY2M2U4YzhiNWEyNGYyOGI2MWNiZWE2M2FiZjA5ZGUwMTcxYzlhNWY5MTBjM2NkviDV6Q==: --dhchap-ctrl-secret DHHC-1:01:ZDI4YjU0MDBhZDY2ZDYyM2JiOWVlNDdhOTE2YTFhZmKfhOZS: 00:11:42.277 10:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:42.277 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:42.277 10:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:11:42.277 10:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.277 10:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.277 10:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.277 10:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:42.277 10:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:42.277 10:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:42.537 10:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:11:42.537 10:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key 
ckey qpairs 00:11:42.537 10:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:42.537 10:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:42.537 10:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:42.537 10:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:42.537 10:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --dhchap-key key3 00:11:42.537 10:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.537 10:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.537 10:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.537 10:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:42.537 10:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:42.537 10:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:43.104 00:11:43.104 10:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:43.104 10:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:43.104 10:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:43.378 10:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:43.378 10:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:43.378 10:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.378 10:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.378 10:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.378 10:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:43.378 { 00:11:43.378 "cntlid": 87, 00:11:43.378 "qid": 0, 00:11:43.378 "state": "enabled", 00:11:43.378 "thread": "nvmf_tgt_poll_group_000", 00:11:43.378 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca", 00:11:43.378 "listen_address": { 00:11:43.378 "trtype": "TCP", 00:11:43.378 "adrfam": "IPv4", 00:11:43.378 "traddr": "10.0.0.3", 00:11:43.378 "trsvcid": "4420" 00:11:43.378 }, 00:11:43.378 "peer_address": { 00:11:43.378 "trtype": "TCP", 00:11:43.378 "adrfam": "IPv4", 00:11:43.378 "traddr": "10.0.0.1", 00:11:43.378 "trsvcid": 
"49362" 00:11:43.378 }, 00:11:43.378 "auth": { 00:11:43.378 "state": "completed", 00:11:43.378 "digest": "sha384", 00:11:43.378 "dhgroup": "ffdhe6144" 00:11:43.378 } 00:11:43.378 } 00:11:43.378 ]' 00:11:43.378 10:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:43.378 10:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:43.378 10:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:43.378 10:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:43.378 10:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:43.378 10:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:43.378 10:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:43.378 10:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:43.651 10:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzQ5ODMxYmRjYzRmOTMwOTM5MDZiYTJkOWE5ODBmOWI4NjY3ZmFhNjc5MTliNWJmNzg0ZjQ4ZWM4OWJmN2Y5MZ5XXyo=: 00:11:43.651 10:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --hostid 02f14d39-9b07-4abc-bc4a-e88d43a336ca -l 0 --dhchap-secret DHHC-1:03:NzQ5ODMxYmRjYzRmOTMwOTM5MDZiYTJkOWE5ODBmOWI4NjY3ZmFhNjc5MTliNWJmNzg0ZjQ4ZWM4OWJmN2Y5MZ5XXyo=: 00:11:44.218 10:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:44.218 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:44.219 10:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:11:44.219 10:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.219 10:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.219 10:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.219 10:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:44.219 10:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:44.219 10:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:44.219 10:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:44.478 10:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:11:44.478 10:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest 
dhgroup key ckey qpairs 00:11:44.478 10:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:44.478 10:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:44.478 10:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:44.478 10:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:44.478 10:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:44.478 10:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.478 10:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.478 10:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.478 10:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:44.478 10:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:44.478 10:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:45.046 00:11:45.046 10:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:45.046 10:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:45.046 10:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:45.614 10:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:45.614 10:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:45.614 10:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.614 10:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.614 10:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.614 10:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:45.614 { 00:11:45.614 "cntlid": 89, 00:11:45.614 "qid": 0, 00:11:45.614 "state": "enabled", 00:11:45.614 "thread": "nvmf_tgt_poll_group_000", 00:11:45.614 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca", 00:11:45.614 "listen_address": { 00:11:45.614 "trtype": "TCP", 00:11:45.614 "adrfam": "IPv4", 00:11:45.614 "traddr": "10.0.0.3", 00:11:45.614 "trsvcid": "4420" 00:11:45.614 }, 00:11:45.614 "peer_address": { 00:11:45.614 
"trtype": "TCP", 00:11:45.614 "adrfam": "IPv4", 00:11:45.614 "traddr": "10.0.0.1", 00:11:45.614 "trsvcid": "49392" 00:11:45.614 }, 00:11:45.614 "auth": { 00:11:45.614 "state": "completed", 00:11:45.614 "digest": "sha384", 00:11:45.614 "dhgroup": "ffdhe8192" 00:11:45.614 } 00:11:45.614 } 00:11:45.614 ]' 00:11:45.614 10:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:45.614 10:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:45.614 10:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:45.614 10:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:45.614 10:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:45.614 10:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:45.614 10:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:45.614 10:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:45.873 10:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODQ0MDhmMjIwZThhYWFmYzNmNTA4NjJiOWExMzI1MGU2ZTZhZTUzZjUxYzI4MTA4LDyCxg==: --dhchap-ctrl-secret DHHC-1:03:NGQ1MmU3NmY0YWYxMGY5OGNmMDI5ODUyMTA1NzBjYzMyZTk3Y2I2NWUxMmJlMjUzZjgzNmIxYTUyMDg1OWQ0YarPIsE=: 00:11:45.873 10:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --hostid 02f14d39-9b07-4abc-bc4a-e88d43a336ca -l 0 --dhchap-secret DHHC-1:00:ODQ0MDhmMjIwZThhYWFmYzNmNTA4NjJiOWExMzI1MGU2ZTZhZTUzZjUxYzI4MTA4LDyCxg==: --dhchap-ctrl-secret DHHC-1:03:NGQ1MmU3NmY0YWYxMGY5OGNmMDI5ODUyMTA1NzBjYzMyZTk3Y2I2NWUxMmJlMjUzZjgzNmIxYTUyMDg1OWQ0YarPIsE=: 00:11:46.440 10:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:46.440 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:46.440 10:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:11:46.440 10:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.440 10:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.440 10:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.440 10:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:46.440 10:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:46.440 10:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:46.698 10:55:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:11:46.698 10:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:46.698 10:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:46.698 10:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:46.698 10:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:46.698 10:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:46.698 10:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:46.698 10:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.698 10:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.698 10:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.698 10:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:46.698 10:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:46.698 10:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:47.265 00:11:47.265 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:47.265 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:47.265 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:47.832 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:47.832 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:47.832 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.832 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.832 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.832 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:47.832 { 00:11:47.832 "cntlid": 91, 00:11:47.832 "qid": 0, 00:11:47.832 "state": "enabled", 00:11:47.832 "thread": "nvmf_tgt_poll_group_000", 00:11:47.832 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca", 
00:11:47.832 "listen_address": { 00:11:47.832 "trtype": "TCP", 00:11:47.832 "adrfam": "IPv4", 00:11:47.832 "traddr": "10.0.0.3", 00:11:47.832 "trsvcid": "4420" 00:11:47.832 }, 00:11:47.832 "peer_address": { 00:11:47.832 "trtype": "TCP", 00:11:47.832 "adrfam": "IPv4", 00:11:47.832 "traddr": "10.0.0.1", 00:11:47.832 "trsvcid": "49426" 00:11:47.832 }, 00:11:47.832 "auth": { 00:11:47.832 "state": "completed", 00:11:47.832 "digest": "sha384", 00:11:47.832 "dhgroup": "ffdhe8192" 00:11:47.832 } 00:11:47.832 } 00:11:47.832 ]' 00:11:47.832 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:47.832 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:47.832 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:47.832 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:47.832 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:47.832 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:47.832 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:47.832 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:48.090 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjM3YTMxZDFiZDhiODNkZjY3NTMwYjM4Mzg0YzQ5NWQJ3Ioy: --dhchap-ctrl-secret DHHC-1:02:NDVhYTVhZjlmY2Y0ZDdlNzY3NmM0M2NhYzhjMWYwMWIyYjNkMDkwNjQxYWJhZTFjDYJZFw==: 00:11:48.090 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --hostid 02f14d39-9b07-4abc-bc4a-e88d43a336ca -l 0 --dhchap-secret DHHC-1:01:NjM3YTMxZDFiZDhiODNkZjY3NTMwYjM4Mzg0YzQ5NWQJ3Ioy: --dhchap-ctrl-secret DHHC-1:02:NDVhYTVhZjlmY2Y0ZDdlNzY3NmM0M2NhYzhjMWYwMWIyYjNkMDkwNjQxYWJhZTFjDYJZFw==: 00:11:48.656 10:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:48.656 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:48.656 10:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:11:48.656 10:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.656 10:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.656 10:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.656 10:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:48.656 10:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:48.656 10:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:48.914 10:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:11:48.914 10:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:48.914 10:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:48.914 10:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:48.914 10:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:48.914 10:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:48.914 10:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:48.914 10:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.914 10:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.914 10:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.914 10:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:48.914 10:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:48.915 10:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:49.482 00:11:49.482 10:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:49.482 10:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:49.482 10:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:49.741 10:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:49.741 10:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:49.741 10:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.741 10:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:49.741 10:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.741 10:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:49.741 { 00:11:49.741 "cntlid": 93, 00:11:49.741 "qid": 0, 00:11:49.741 "state": "enabled", 00:11:49.741 "thread": 
"nvmf_tgt_poll_group_000", 00:11:49.741 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca", 00:11:49.741 "listen_address": { 00:11:49.741 "trtype": "TCP", 00:11:49.741 "adrfam": "IPv4", 00:11:49.741 "traddr": "10.0.0.3", 00:11:49.741 "trsvcid": "4420" 00:11:49.741 }, 00:11:49.741 "peer_address": { 00:11:49.741 "trtype": "TCP", 00:11:49.741 "adrfam": "IPv4", 00:11:49.741 "traddr": "10.0.0.1", 00:11:49.741 "trsvcid": "49452" 00:11:49.741 }, 00:11:49.741 "auth": { 00:11:49.741 "state": "completed", 00:11:49.741 "digest": "sha384", 00:11:49.741 "dhgroup": "ffdhe8192" 00:11:49.741 } 00:11:49.741 } 00:11:49.741 ]' 00:11:49.741 10:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:49.741 10:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:49.741 10:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:50.001 10:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:50.001 10:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:50.001 10:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:50.001 10:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:50.001 10:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:50.259 10:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWY2M2U4YzhiNWEyNGYyOGI2MWNiZWE2M2FiZjA5ZGUwMTcxYzlhNWY5MTBjM2NkviDV6Q==: --dhchap-ctrl-secret DHHC-1:01:ZDI4YjU0MDBhZDY2ZDYyM2JiOWVlNDdhOTE2YTFhZmKfhOZS: 00:11:50.259 10:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --hostid 02f14d39-9b07-4abc-bc4a-e88d43a336ca -l 0 --dhchap-secret DHHC-1:02:ZWY2M2U4YzhiNWEyNGYyOGI2MWNiZWE2M2FiZjA5ZGUwMTcxYzlhNWY5MTBjM2NkviDV6Q==: --dhchap-ctrl-secret DHHC-1:01:ZDI4YjU0MDBhZDY2ZDYyM2JiOWVlNDdhOTE2YTFhZmKfhOZS: 00:11:50.827 10:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:50.827 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:50.827 10:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:11:50.827 10:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.827 10:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.827 10:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.827 10:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:50.827 10:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:50.827 10:55:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:51.087 10:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:11:51.087 10:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:51.087 10:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:51.087 10:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:51.087 10:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:51.087 10:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:51.087 10:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --dhchap-key key3 00:11:51.087 10:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.087 10:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:51.087 10:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.087 10:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:51.087 10:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:51.087 10:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:52.024 00:11:52.024 10:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:52.024 10:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:52.024 10:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:52.024 10:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:52.024 10:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:52.024 10:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.024 10:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.024 10:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.024 10:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:52.024 { 00:11:52.024 "cntlid": 95, 00:11:52.024 "qid": 0, 00:11:52.024 "state": "enabled", 00:11:52.024 
"thread": "nvmf_tgt_poll_group_000", 00:11:52.024 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca", 00:11:52.024 "listen_address": { 00:11:52.024 "trtype": "TCP", 00:11:52.024 "adrfam": "IPv4", 00:11:52.024 "traddr": "10.0.0.3", 00:11:52.024 "trsvcid": "4420" 00:11:52.024 }, 00:11:52.024 "peer_address": { 00:11:52.024 "trtype": "TCP", 00:11:52.024 "adrfam": "IPv4", 00:11:52.024 "traddr": "10.0.0.1", 00:11:52.024 "trsvcid": "37024" 00:11:52.024 }, 00:11:52.024 "auth": { 00:11:52.024 "state": "completed", 00:11:52.024 "digest": "sha384", 00:11:52.024 "dhgroup": "ffdhe8192" 00:11:52.024 } 00:11:52.024 } 00:11:52.024 ]' 00:11:52.024 10:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:52.283 10:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:52.283 10:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:52.283 10:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:52.283 10:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:52.283 10:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:52.283 10:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:52.283 10:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:52.541 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzQ5ODMxYmRjYzRmOTMwOTM5MDZiYTJkOWE5ODBmOWI4NjY3ZmFhNjc5MTliNWJmNzg0ZjQ4ZWM4OWJmN2Y5MZ5XXyo=: 00:11:52.541 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --hostid 02f14d39-9b07-4abc-bc4a-e88d43a336ca -l 0 --dhchap-secret DHHC-1:03:NzQ5ODMxYmRjYzRmOTMwOTM5MDZiYTJkOWE5ODBmOWI4NjY3ZmFhNjc5MTliNWJmNzg0ZjQ4ZWM4OWJmN2Y5MZ5XXyo=: 00:11:53.109 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:53.109 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:53.109 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:11:53.109 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.109 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:53.109 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.109 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:11:53.109 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:53.109 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:53.109 10:55:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:53.109 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:53.367 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:11:53.367 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:53.367 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:53.367 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:53.367 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:53.367 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:53.367 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:53.367 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.367 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:53.367 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.367 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:53.367 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:53.368 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:53.626 00:11:53.626 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:53.626 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:53.626 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:53.884 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:53.885 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:53.885 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.885 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:53.885 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.885 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:53.885 { 00:11:53.885 "cntlid": 97, 00:11:53.885 "qid": 0, 00:11:53.885 "state": "enabled", 00:11:53.885 "thread": "nvmf_tgt_poll_group_000", 00:11:53.885 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca", 00:11:53.885 "listen_address": { 00:11:53.885 "trtype": "TCP", 00:11:53.885 "adrfam": "IPv4", 00:11:53.885 "traddr": "10.0.0.3", 00:11:53.885 "trsvcid": "4420" 00:11:53.885 }, 00:11:53.885 "peer_address": { 00:11:53.885 "trtype": "TCP", 00:11:53.885 "adrfam": "IPv4", 00:11:53.885 "traddr": "10.0.0.1", 00:11:53.885 "trsvcid": "37038" 00:11:53.885 }, 00:11:53.885 "auth": { 00:11:53.885 "state": "completed", 00:11:53.885 "digest": "sha512", 00:11:53.885 "dhgroup": "null" 00:11:53.885 } 00:11:53.885 } 00:11:53.885 ]' 00:11:53.885 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:54.144 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:54.144 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:54.144 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:54.144 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:54.144 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:54.144 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:54.144 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:54.403 10:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODQ0MDhmMjIwZThhYWFmYzNmNTA4NjJiOWExMzI1MGU2ZTZhZTUzZjUxYzI4MTA4LDyCxg==: --dhchap-ctrl-secret DHHC-1:03:NGQ1MmU3NmY0YWYxMGY5OGNmMDI5ODUyMTA1NzBjYzMyZTk3Y2I2NWUxMmJlMjUzZjgzNmIxYTUyMDg1OWQ0YarPIsE=: 00:11:54.403 10:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --hostid 02f14d39-9b07-4abc-bc4a-e88d43a336ca -l 0 --dhchap-secret DHHC-1:00:ODQ0MDhmMjIwZThhYWFmYzNmNTA4NjJiOWExMzI1MGU2ZTZhZTUzZjUxYzI4MTA4LDyCxg==: --dhchap-ctrl-secret DHHC-1:03:NGQ1MmU3NmY0YWYxMGY5OGNmMDI5ODUyMTA1NzBjYzMyZTk3Y2I2NWUxMmJlMjUzZjgzNmIxYTUyMDg1OWQ0YarPIsE=: 00:11:54.971 10:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:54.971 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:54.971 10:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:11:54.971 10:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.971 10:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.971 10:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:11:54.971 10:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:54.971 10:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:54.971 10:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:55.231 10:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:11:55.231 10:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:55.231 10:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:55.231 10:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:55.231 10:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:55.231 10:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:55.231 10:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:55.231 10:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.231 10:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:55.231 10:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.231 10:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:55.231 10:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:55.231 10:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:55.489 00:11:55.490 10:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:55.490 10:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:55.490 10:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:55.749 10:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:55.749 10:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:55.749 10:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.749 10:55:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:55.749 10:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.749 10:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:55.749 { 00:11:55.749 "cntlid": 99, 00:11:55.749 "qid": 0, 00:11:55.749 "state": "enabled", 00:11:55.749 "thread": "nvmf_tgt_poll_group_000", 00:11:55.749 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca", 00:11:55.749 "listen_address": { 00:11:55.749 "trtype": "TCP", 00:11:55.749 "adrfam": "IPv4", 00:11:55.749 "traddr": "10.0.0.3", 00:11:55.749 "trsvcid": "4420" 00:11:55.749 }, 00:11:55.749 "peer_address": { 00:11:55.749 "trtype": "TCP", 00:11:55.749 "adrfam": "IPv4", 00:11:55.749 "traddr": "10.0.0.1", 00:11:55.749 "trsvcid": "37060" 00:11:55.749 }, 00:11:55.749 "auth": { 00:11:55.749 "state": "completed", 00:11:55.749 "digest": "sha512", 00:11:55.749 "dhgroup": "null" 00:11:55.749 } 00:11:55.749 } 00:11:55.749 ]' 00:11:55.749 10:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:55.749 10:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:55.749 10:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:55.749 10:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:55.749 10:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:55.749 10:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:55.749 10:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:55.749 10:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:56.008 10:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjM3YTMxZDFiZDhiODNkZjY3NTMwYjM4Mzg0YzQ5NWQJ3Ioy: --dhchap-ctrl-secret DHHC-1:02:NDVhYTVhZjlmY2Y0ZDdlNzY3NmM0M2NhYzhjMWYwMWIyYjNkMDkwNjQxYWJhZTFjDYJZFw==: 00:11:56.008 10:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --hostid 02f14d39-9b07-4abc-bc4a-e88d43a336ca -l 0 --dhchap-secret DHHC-1:01:NjM3YTMxZDFiZDhiODNkZjY3NTMwYjM4Mzg0YzQ5NWQJ3Ioy: --dhchap-ctrl-secret DHHC-1:02:NDVhYTVhZjlmY2Y0ZDdlNzY3NmM0M2NhYzhjMWYwMWIyYjNkMDkwNjQxYWJhZTFjDYJZFw==: 00:11:56.971 10:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:56.971 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:56.971 10:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:11:56.971 10:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.971 10:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.971 10:55:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.971 10:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:56.971 10:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:56.971 10:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:56.971 10:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:11:56.971 10:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:56.971 10:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:56.971 10:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:56.971 10:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:56.971 10:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:56.971 10:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:56.971 10:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.971 10:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.971 10:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.971 10:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:56.971 10:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:56.971 10:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:57.235 00:11:57.235 10:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:57.235 10:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:57.235 10:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:57.803 10:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:57.803 10:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:57.803 10:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.803 10:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.803 10:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.803 10:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:57.803 { 00:11:57.803 "cntlid": 101, 00:11:57.803 "qid": 0, 00:11:57.803 "state": "enabled", 00:11:57.803 "thread": "nvmf_tgt_poll_group_000", 00:11:57.803 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca", 00:11:57.803 "listen_address": { 00:11:57.803 "trtype": "TCP", 00:11:57.803 "adrfam": "IPv4", 00:11:57.803 "traddr": "10.0.0.3", 00:11:57.803 "trsvcid": "4420" 00:11:57.803 }, 00:11:57.803 "peer_address": { 00:11:57.803 "trtype": "TCP", 00:11:57.803 "adrfam": "IPv4", 00:11:57.803 "traddr": "10.0.0.1", 00:11:57.803 "trsvcid": "37080" 00:11:57.803 }, 00:11:57.803 "auth": { 00:11:57.803 "state": "completed", 00:11:57.803 "digest": "sha512", 00:11:57.803 "dhgroup": "null" 00:11:57.803 } 00:11:57.803 } 00:11:57.803 ]' 00:11:57.803 10:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:57.803 10:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:57.803 10:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:57.803 10:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:57.803 10:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:57.803 10:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:57.803 10:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:57.803 10:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:58.062 10:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWY2M2U4YzhiNWEyNGYyOGI2MWNiZWE2M2FiZjA5ZGUwMTcxYzlhNWY5MTBjM2NkviDV6Q==: --dhchap-ctrl-secret DHHC-1:01:ZDI4YjU0MDBhZDY2ZDYyM2JiOWVlNDdhOTE2YTFhZmKfhOZS: 00:11:58.062 10:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --hostid 02f14d39-9b07-4abc-bc4a-e88d43a336ca -l 0 --dhchap-secret DHHC-1:02:ZWY2M2U4YzhiNWEyNGYyOGI2MWNiZWE2M2FiZjA5ZGUwMTcxYzlhNWY5MTBjM2NkviDV6Q==: --dhchap-ctrl-secret DHHC-1:01:ZDI4YjU0MDBhZDY2ZDYyM2JiOWVlNDdhOTE2YTFhZmKfhOZS: 00:11:58.630 10:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:58.630 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:58.630 10:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:11:58.630 10:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.630 10:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:11:58.630 10:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.630 10:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:58.630 10:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:58.630 10:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:58.890 10:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:11:58.890 10:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:58.890 10:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:58.890 10:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:58.890 10:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:58.890 10:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:58.890 10:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --dhchap-key key3 00:11:58.890 10:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.890 10:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.890 10:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.890 10:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:58.890 10:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:58.890 10:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:59.149 00:11:59.149 10:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:59.149 10:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:59.149 10:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:59.408 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:59.408 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:59.408 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:59.408 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.408 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.408 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:59.408 { 00:11:59.408 "cntlid": 103, 00:11:59.408 "qid": 0, 00:11:59.408 "state": "enabled", 00:11:59.408 "thread": "nvmf_tgt_poll_group_000", 00:11:59.408 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca", 00:11:59.408 "listen_address": { 00:11:59.408 "trtype": "TCP", 00:11:59.408 "adrfam": "IPv4", 00:11:59.408 "traddr": "10.0.0.3", 00:11:59.408 "trsvcid": "4420" 00:11:59.408 }, 00:11:59.408 "peer_address": { 00:11:59.408 "trtype": "TCP", 00:11:59.408 "adrfam": "IPv4", 00:11:59.408 "traddr": "10.0.0.1", 00:11:59.408 "trsvcid": "37100" 00:11:59.408 }, 00:11:59.408 "auth": { 00:11:59.408 "state": "completed", 00:11:59.408 "digest": "sha512", 00:11:59.408 "dhgroup": "null" 00:11:59.408 } 00:11:59.408 } 00:11:59.408 ]' 00:11:59.408 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:59.408 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:59.408 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:59.666 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:59.666 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:59.666 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:59.666 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:59.666 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:59.925 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzQ5ODMxYmRjYzRmOTMwOTM5MDZiYTJkOWE5ODBmOWI4NjY3ZmFhNjc5MTliNWJmNzg0ZjQ4ZWM4OWJmN2Y5MZ5XXyo=: 00:11:59.925 10:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --hostid 02f14d39-9b07-4abc-bc4a-e88d43a336ca -l 0 --dhchap-secret DHHC-1:03:NzQ5ODMxYmRjYzRmOTMwOTM5MDZiYTJkOWE5ODBmOWI4NjY3ZmFhNjc5MTliNWJmNzg0ZjQ4ZWM4OWJmN2Y5MZ5XXyo=: 00:12:00.493 10:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:00.493 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:00.493 10:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:12:00.493 10:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.493 10:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.493 10:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:12:00.493 10:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:00.493 10:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:00.493 10:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:00.493 10:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:00.752 10:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:12:00.752 10:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:00.752 10:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:00.752 10:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:00.752 10:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:00.752 10:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:00.752 10:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:00.752 10:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.752 10:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.752 10:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.752 10:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:00.752 10:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:00.752 10:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:01.011 00:12:01.011 10:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:01.011 10:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:01.011 10:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:01.580 10:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:01.580 10:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:01.580 
10:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.580 10:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.580 10:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.580 10:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:01.580 { 00:12:01.580 "cntlid": 105, 00:12:01.580 "qid": 0, 00:12:01.580 "state": "enabled", 00:12:01.580 "thread": "nvmf_tgt_poll_group_000", 00:12:01.580 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca", 00:12:01.580 "listen_address": { 00:12:01.580 "trtype": "TCP", 00:12:01.580 "adrfam": "IPv4", 00:12:01.580 "traddr": "10.0.0.3", 00:12:01.580 "trsvcid": "4420" 00:12:01.580 }, 00:12:01.580 "peer_address": { 00:12:01.580 "trtype": "TCP", 00:12:01.580 "adrfam": "IPv4", 00:12:01.580 "traddr": "10.0.0.1", 00:12:01.580 "trsvcid": "37132" 00:12:01.580 }, 00:12:01.580 "auth": { 00:12:01.580 "state": "completed", 00:12:01.580 "digest": "sha512", 00:12:01.580 "dhgroup": "ffdhe2048" 00:12:01.580 } 00:12:01.580 } 00:12:01.580 ]' 00:12:01.580 10:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:01.580 10:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:01.580 10:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:01.580 10:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:01.580 10:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:01.580 10:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:01.580 10:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:01.580 10:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:01.839 10:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODQ0MDhmMjIwZThhYWFmYzNmNTA4NjJiOWExMzI1MGU2ZTZhZTUzZjUxYzI4MTA4LDyCxg==: --dhchap-ctrl-secret DHHC-1:03:NGQ1MmU3NmY0YWYxMGY5OGNmMDI5ODUyMTA1NzBjYzMyZTk3Y2I2NWUxMmJlMjUzZjgzNmIxYTUyMDg1OWQ0YarPIsE=: 00:12:01.839 10:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --hostid 02f14d39-9b07-4abc-bc4a-e88d43a336ca -l 0 --dhchap-secret DHHC-1:00:ODQ0MDhmMjIwZThhYWFmYzNmNTA4NjJiOWExMzI1MGU2ZTZhZTUzZjUxYzI4MTA4LDyCxg==: --dhchap-ctrl-secret DHHC-1:03:NGQ1MmU3NmY0YWYxMGY5OGNmMDI5ODUyMTA1NzBjYzMyZTk3Y2I2NWUxMmJlMjUzZjgzNmIxYTUyMDg1OWQ0YarPIsE=: 00:12:02.407 10:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:02.407 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:02.407 10:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:12:02.407 10:55:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.407 10:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.407 10:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.407 10:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:02.407 10:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:02.407 10:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:02.666 10:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:12:02.666 10:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:02.666 10:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:02.666 10:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:02.666 10:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:02.666 10:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:02.666 10:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:02.666 10:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.666 10:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.666 10:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.666 10:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:02.667 10:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:02.667 10:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:02.926 00:12:02.926 10:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:02.926 10:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:02.926 10:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:03.185 10:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:12:03.185 10:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:03.185 10:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.185 10:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.185 10:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.185 10:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:03.185 { 00:12:03.185 "cntlid": 107, 00:12:03.185 "qid": 0, 00:12:03.185 "state": "enabled", 00:12:03.185 "thread": "nvmf_tgt_poll_group_000", 00:12:03.185 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca", 00:12:03.185 "listen_address": { 00:12:03.185 "trtype": "TCP", 00:12:03.185 "adrfam": "IPv4", 00:12:03.185 "traddr": "10.0.0.3", 00:12:03.185 "trsvcid": "4420" 00:12:03.185 }, 00:12:03.185 "peer_address": { 00:12:03.185 "trtype": "TCP", 00:12:03.185 "adrfam": "IPv4", 00:12:03.185 "traddr": "10.0.0.1", 00:12:03.185 "trsvcid": "58876" 00:12:03.185 }, 00:12:03.185 "auth": { 00:12:03.185 "state": "completed", 00:12:03.185 "digest": "sha512", 00:12:03.185 "dhgroup": "ffdhe2048" 00:12:03.185 } 00:12:03.185 } 00:12:03.185 ]' 00:12:03.185 10:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:03.185 10:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:03.185 10:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:03.444 10:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:03.445 10:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:03.445 10:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:03.445 10:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:03.445 10:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:03.704 10:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjM3YTMxZDFiZDhiODNkZjY3NTMwYjM4Mzg0YzQ5NWQJ3Ioy: --dhchap-ctrl-secret DHHC-1:02:NDVhYTVhZjlmY2Y0ZDdlNzY3NmM0M2NhYzhjMWYwMWIyYjNkMDkwNjQxYWJhZTFjDYJZFw==: 00:12:03.704 10:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --hostid 02f14d39-9b07-4abc-bc4a-e88d43a336ca -l 0 --dhchap-secret DHHC-1:01:NjM3YTMxZDFiZDhiODNkZjY3NTMwYjM4Mzg0YzQ5NWQJ3Ioy: --dhchap-ctrl-secret DHHC-1:02:NDVhYTVhZjlmY2Y0ZDdlNzY3NmM0M2NhYzhjMWYwMWIyYjNkMDkwNjQxYWJhZTFjDYJZFw==: 00:12:04.271 10:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:04.271 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:04.271 10:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:12:04.271 10:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.271 10:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:04.271 10:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.271 10:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:04.271 10:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:04.271 10:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:04.530 10:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:12:04.530 10:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:04.530 10:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:04.530 10:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:04.530 10:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:04.530 10:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:04.530 10:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:04.530 10:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.530 10:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:04.530 10:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.530 10:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:04.530 10:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:04.530 10:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:04.789 00:12:04.789 10:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:04.789 10:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:04.789 10:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:12:05.047 10:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:05.047 10:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:05.048 10:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.048 10:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:05.048 10:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.048 10:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:05.048 { 00:12:05.048 "cntlid": 109, 00:12:05.048 "qid": 0, 00:12:05.048 "state": "enabled", 00:12:05.048 "thread": "nvmf_tgt_poll_group_000", 00:12:05.048 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca", 00:12:05.048 "listen_address": { 00:12:05.048 "trtype": "TCP", 00:12:05.048 "adrfam": "IPv4", 00:12:05.048 "traddr": "10.0.0.3", 00:12:05.048 "trsvcid": "4420" 00:12:05.048 }, 00:12:05.048 "peer_address": { 00:12:05.048 "trtype": "TCP", 00:12:05.048 "adrfam": "IPv4", 00:12:05.048 "traddr": "10.0.0.1", 00:12:05.048 "trsvcid": "58906" 00:12:05.048 }, 00:12:05.048 "auth": { 00:12:05.048 "state": "completed", 00:12:05.048 "digest": "sha512", 00:12:05.048 "dhgroup": "ffdhe2048" 00:12:05.048 } 00:12:05.048 } 00:12:05.048 ]' 00:12:05.048 10:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:05.048 10:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:05.048 10:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:05.306 10:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:05.306 10:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:05.306 10:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:05.306 10:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:05.306 10:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:05.566 10:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWY2M2U4YzhiNWEyNGYyOGI2MWNiZWE2M2FiZjA5ZGUwMTcxYzlhNWY5MTBjM2NkviDV6Q==: --dhchap-ctrl-secret DHHC-1:01:ZDI4YjU0MDBhZDY2ZDYyM2JiOWVlNDdhOTE2YTFhZmKfhOZS: 00:12:05.566 10:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --hostid 02f14d39-9b07-4abc-bc4a-e88d43a336ca -l 0 --dhchap-secret DHHC-1:02:ZWY2M2U4YzhiNWEyNGYyOGI2MWNiZWE2M2FiZjA5ZGUwMTcxYzlhNWY5MTBjM2NkviDV6Q==: --dhchap-ctrl-secret DHHC-1:01:ZDI4YjU0MDBhZDY2ZDYyM2JiOWVlNDdhOTE2YTFhZmKfhOZS: 00:12:06.133 10:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:06.133 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:06.133 10:55:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:12:06.133 10:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.133 10:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.133 10:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.133 10:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:06.133 10:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:06.133 10:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:06.393 10:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:12:06.393 10:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:06.393 10:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:06.393 10:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:06.393 10:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:06.393 10:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:06.393 10:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --dhchap-key key3 00:12:06.393 10:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.393 10:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.652 10:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.652 10:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:06.652 10:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:06.652 10:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:06.910 00:12:06.910 10:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:06.910 10:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:06.910 10:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:12:07.169 10:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:07.169 10:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:07.169 10:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.169 10:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.169 10:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.169 10:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:07.169 { 00:12:07.169 "cntlid": 111, 00:12:07.169 "qid": 0, 00:12:07.169 "state": "enabled", 00:12:07.169 "thread": "nvmf_tgt_poll_group_000", 00:12:07.169 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca", 00:12:07.169 "listen_address": { 00:12:07.169 "trtype": "TCP", 00:12:07.169 "adrfam": "IPv4", 00:12:07.169 "traddr": "10.0.0.3", 00:12:07.169 "trsvcid": "4420" 00:12:07.169 }, 00:12:07.169 "peer_address": { 00:12:07.169 "trtype": "TCP", 00:12:07.169 "adrfam": "IPv4", 00:12:07.169 "traddr": "10.0.0.1", 00:12:07.169 "trsvcid": "58932" 00:12:07.169 }, 00:12:07.169 "auth": { 00:12:07.169 "state": "completed", 00:12:07.169 "digest": "sha512", 00:12:07.169 "dhgroup": "ffdhe2048" 00:12:07.169 } 00:12:07.169 } 00:12:07.169 ]' 00:12:07.169 10:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:07.169 10:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:07.169 10:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:07.169 10:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:07.169 10:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:07.169 10:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:07.169 10:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:07.169 10:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:07.737 10:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzQ5ODMxYmRjYzRmOTMwOTM5MDZiYTJkOWE5ODBmOWI4NjY3ZmFhNjc5MTliNWJmNzg0ZjQ4ZWM4OWJmN2Y5MZ5XXyo=: 00:12:07.737 10:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --hostid 02f14d39-9b07-4abc-bc4a-e88d43a336ca -l 0 --dhchap-secret DHHC-1:03:NzQ5ODMxYmRjYzRmOTMwOTM5MDZiYTJkOWE5ODBmOWI4NjY3ZmFhNjc5MTliNWJmNzg0ZjQ4ZWM4OWJmN2Y5MZ5XXyo=: 00:12:08.305 10:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:08.305 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:08.305 10:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:12:08.305 10:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.305 10:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.305 10:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.305 10:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:08.305 10:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:08.305 10:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:08.305 10:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:08.564 10:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:12:08.564 10:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:08.564 10:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:08.564 10:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:08.564 10:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:08.564 10:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:08.564 10:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:08.564 10:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.564 10:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.564 10:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.564 10:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:08.564 10:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:08.564 10:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:08.823 00:12:08.823 10:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:08.823 10:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 
00:12:08.823 10:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:09.082 10:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:09.082 10:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:09.082 10:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.082 10:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.082 10:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.082 10:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:09.082 { 00:12:09.082 "cntlid": 113, 00:12:09.082 "qid": 0, 00:12:09.082 "state": "enabled", 00:12:09.082 "thread": "nvmf_tgt_poll_group_000", 00:12:09.082 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca", 00:12:09.082 "listen_address": { 00:12:09.082 "trtype": "TCP", 00:12:09.082 "adrfam": "IPv4", 00:12:09.082 "traddr": "10.0.0.3", 00:12:09.082 "trsvcid": "4420" 00:12:09.082 }, 00:12:09.082 "peer_address": { 00:12:09.082 "trtype": "TCP", 00:12:09.082 "adrfam": "IPv4", 00:12:09.082 "traddr": "10.0.0.1", 00:12:09.082 "trsvcid": "58952" 00:12:09.082 }, 00:12:09.082 "auth": { 00:12:09.082 "state": "completed", 00:12:09.082 "digest": "sha512", 00:12:09.082 "dhgroup": "ffdhe3072" 00:12:09.082 } 00:12:09.082 } 00:12:09.082 ]' 00:12:09.082 10:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:09.082 10:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:09.082 10:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:09.082 10:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:09.082 10:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:09.082 10:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:09.082 10:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:09.082 10:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:09.341 10:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODQ0MDhmMjIwZThhYWFmYzNmNTA4NjJiOWExMzI1MGU2ZTZhZTUzZjUxYzI4MTA4LDyCxg==: --dhchap-ctrl-secret DHHC-1:03:NGQ1MmU3NmY0YWYxMGY5OGNmMDI5ODUyMTA1NzBjYzMyZTk3Y2I2NWUxMmJlMjUzZjgzNmIxYTUyMDg1OWQ0YarPIsE=: 00:12:09.341 10:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --hostid 02f14d39-9b07-4abc-bc4a-e88d43a336ca -l 0 --dhchap-secret DHHC-1:00:ODQ0MDhmMjIwZThhYWFmYzNmNTA4NjJiOWExMzI1MGU2ZTZhZTUzZjUxYzI4MTA4LDyCxg==: --dhchap-ctrl-secret 
DHHC-1:03:NGQ1MmU3NmY0YWYxMGY5OGNmMDI5ODUyMTA1NzBjYzMyZTk3Y2I2NWUxMmJlMjUzZjgzNmIxYTUyMDg1OWQ0YarPIsE=: 00:12:10.327 10:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:10.327 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:10.327 10:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:12:10.327 10:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.328 10:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.328 10:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.328 10:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:10.328 10:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:10.328 10:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:10.328 10:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:12:10.328 10:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:10.328 10:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:10.328 10:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:10.328 10:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:10.328 10:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:10.328 10:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:10.328 10:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.328 10:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.328 10:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.328 10:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:10.328 10:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:10.328 10:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:10.599 00:12:10.599 10:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:10.599 10:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:10.599 10:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:10.858 10:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:10.858 10:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:10.858 10:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.858 10:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.858 10:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.858 10:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:10.858 { 00:12:10.858 "cntlid": 115, 00:12:10.858 "qid": 0, 00:12:10.858 "state": "enabled", 00:12:10.858 "thread": "nvmf_tgt_poll_group_000", 00:12:10.858 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca", 00:12:10.858 "listen_address": { 00:12:10.858 "trtype": "TCP", 00:12:10.858 "adrfam": "IPv4", 00:12:10.858 "traddr": "10.0.0.3", 00:12:10.858 "trsvcid": "4420" 00:12:10.858 }, 00:12:10.858 "peer_address": { 00:12:10.858 "trtype": "TCP", 00:12:10.858 "adrfam": "IPv4", 00:12:10.858 "traddr": "10.0.0.1", 00:12:10.858 "trsvcid": "58966" 00:12:10.858 }, 00:12:10.858 "auth": { 00:12:10.858 "state": "completed", 00:12:10.858 "digest": "sha512", 00:12:10.858 "dhgroup": "ffdhe3072" 00:12:10.858 } 00:12:10.858 } 00:12:10.858 ]' 00:12:10.858 10:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:11.117 10:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:11.117 10:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:11.117 10:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:11.117 10:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:11.117 10:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:11.117 10:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:11.117 10:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:11.375 10:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjM3YTMxZDFiZDhiODNkZjY3NTMwYjM4Mzg0YzQ5NWQJ3Ioy: --dhchap-ctrl-secret DHHC-1:02:NDVhYTVhZjlmY2Y0ZDdlNzY3NmM0M2NhYzhjMWYwMWIyYjNkMDkwNjQxYWJhZTFjDYJZFw==: 00:12:11.375 10:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --hostid 
02f14d39-9b07-4abc-bc4a-e88d43a336ca -l 0 --dhchap-secret DHHC-1:01:NjM3YTMxZDFiZDhiODNkZjY3NTMwYjM4Mzg0YzQ5NWQJ3Ioy: --dhchap-ctrl-secret DHHC-1:02:NDVhYTVhZjlmY2Y0ZDdlNzY3NmM0M2NhYzhjMWYwMWIyYjNkMDkwNjQxYWJhZTFjDYJZFw==: 00:12:11.943 10:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:11.943 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:11.943 10:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:12:11.943 10:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.943 10:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.943 10:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.943 10:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:11.943 10:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:11.943 10:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:12.202 10:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:12:12.202 10:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:12.202 10:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:12.202 10:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:12.202 10:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:12.202 10:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:12.202 10:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:12.202 10:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.202 10:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:12.202 10:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.202 10:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:12.202 10:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:12.202 10:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 
-q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:12.461 00:12:12.461 10:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:12.461 10:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:12.461 10:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:12.720 10:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:12.720 10:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:12.720 10:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.720 10:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:12.720 10:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.720 10:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:12.720 { 00:12:12.720 "cntlid": 117, 00:12:12.720 "qid": 0, 00:12:12.720 "state": "enabled", 00:12:12.720 "thread": "nvmf_tgt_poll_group_000", 00:12:12.720 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca", 00:12:12.720 "listen_address": { 00:12:12.720 "trtype": "TCP", 00:12:12.720 "adrfam": "IPv4", 00:12:12.720 "traddr": "10.0.0.3", 00:12:12.720 "trsvcid": "4420" 00:12:12.720 }, 00:12:12.720 "peer_address": { 00:12:12.720 "trtype": "TCP", 00:12:12.720 "adrfam": "IPv4", 00:12:12.720 "traddr": "10.0.0.1", 00:12:12.720 "trsvcid": "52704" 00:12:12.720 }, 00:12:12.720 "auth": { 00:12:12.720 "state": "completed", 00:12:12.720 "digest": "sha512", 00:12:12.720 "dhgroup": "ffdhe3072" 00:12:12.720 } 00:12:12.720 } 00:12:12.720 ]' 00:12:12.720 10:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:12.979 10:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:12.979 10:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:12.979 10:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:12.979 10:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:12.979 10:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:12.979 10:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:12.979 10:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:13.238 10:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWY2M2U4YzhiNWEyNGYyOGI2MWNiZWE2M2FiZjA5ZGUwMTcxYzlhNWY5MTBjM2NkviDV6Q==: --dhchap-ctrl-secret DHHC-1:01:ZDI4YjU0MDBhZDY2ZDYyM2JiOWVlNDdhOTE2YTFhZmKfhOZS: 00:12:13.238 10:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --hostid 02f14d39-9b07-4abc-bc4a-e88d43a336ca -l 0 --dhchap-secret DHHC-1:02:ZWY2M2U4YzhiNWEyNGYyOGI2MWNiZWE2M2FiZjA5ZGUwMTcxYzlhNWY5MTBjM2NkviDV6Q==: --dhchap-ctrl-secret DHHC-1:01:ZDI4YjU0MDBhZDY2ZDYyM2JiOWVlNDdhOTE2YTFhZmKfhOZS: 00:12:13.806 10:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:13.806 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:13.806 10:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:12:13.806 10:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.806 10:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.806 10:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.806 10:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:13.806 10:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:13.806 10:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:14.064 10:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:12:14.064 10:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:14.064 10:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:14.064 10:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:14.064 10:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:14.064 10:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:14.064 10:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --dhchap-key key3 00:12:14.064 10:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.064 10:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:14.064 10:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.064 10:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:14.064 10:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:14.064 10:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:14.632 00:12:14.632 10:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:14.632 10:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:14.632 10:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:14.632 10:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:14.632 10:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:14.632 10:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.632 10:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:14.632 10:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.632 10:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:14.632 { 00:12:14.632 "cntlid": 119, 00:12:14.632 "qid": 0, 00:12:14.632 "state": "enabled", 00:12:14.632 "thread": "nvmf_tgt_poll_group_000", 00:12:14.632 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca", 00:12:14.632 "listen_address": { 00:12:14.632 "trtype": "TCP", 00:12:14.632 "adrfam": "IPv4", 00:12:14.632 "traddr": "10.0.0.3", 00:12:14.632 "trsvcid": "4420" 00:12:14.632 }, 00:12:14.632 "peer_address": { 00:12:14.632 "trtype": "TCP", 00:12:14.632 "adrfam": "IPv4", 00:12:14.632 "traddr": "10.0.0.1", 00:12:14.632 "trsvcid": "52712" 00:12:14.632 }, 00:12:14.632 "auth": { 00:12:14.632 "state": "completed", 00:12:14.632 "digest": "sha512", 00:12:14.632 "dhgroup": "ffdhe3072" 00:12:14.632 } 00:12:14.632 } 00:12:14.632 ]' 00:12:14.632 10:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:14.891 10:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:14.891 10:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:14.891 10:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:14.891 10:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:14.891 10:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:14.891 10:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:14.891 10:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:15.150 10:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzQ5ODMxYmRjYzRmOTMwOTM5MDZiYTJkOWE5ODBmOWI4NjY3ZmFhNjc5MTliNWJmNzg0ZjQ4ZWM4OWJmN2Y5MZ5XXyo=: 00:12:15.150 10:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 
-q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --hostid 02f14d39-9b07-4abc-bc4a-e88d43a336ca -l 0 --dhchap-secret DHHC-1:03:NzQ5ODMxYmRjYzRmOTMwOTM5MDZiYTJkOWE5ODBmOWI4NjY3ZmFhNjc5MTliNWJmNzg0ZjQ4ZWM4OWJmN2Y5MZ5XXyo=: 00:12:15.717 10:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:15.717 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:15.717 10:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:12:15.717 10:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.717 10:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.717 10:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.717 10:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:15.717 10:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:15.717 10:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:15.717 10:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:16.285 10:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:12:16.285 10:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:16.285 10:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:16.285 10:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:16.285 10:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:16.285 10:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:16.285 10:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:16.285 10:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.285 10:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.285 10:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.285 10:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:16.285 10:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:16.285 10:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:16.544 00:12:16.544 10:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:16.544 10:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:16.544 10:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:16.802 10:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:16.802 10:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:16.802 10:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.802 10:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.802 10:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.802 10:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:16.802 { 00:12:16.802 "cntlid": 121, 00:12:16.802 "qid": 0, 00:12:16.802 "state": "enabled", 00:12:16.802 "thread": "nvmf_tgt_poll_group_000", 00:12:16.803 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca", 00:12:16.803 "listen_address": { 00:12:16.803 "trtype": "TCP", 00:12:16.803 "adrfam": "IPv4", 00:12:16.803 "traddr": "10.0.0.3", 00:12:16.803 "trsvcid": "4420" 00:12:16.803 }, 00:12:16.803 "peer_address": { 00:12:16.803 "trtype": "TCP", 00:12:16.803 "adrfam": "IPv4", 00:12:16.803 "traddr": "10.0.0.1", 00:12:16.803 "trsvcid": "52748" 00:12:16.803 }, 00:12:16.803 "auth": { 00:12:16.803 "state": "completed", 00:12:16.803 "digest": "sha512", 00:12:16.803 "dhgroup": "ffdhe4096" 00:12:16.803 } 00:12:16.803 } 00:12:16.803 ]' 00:12:16.803 10:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:16.803 10:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:16.803 10:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:16.803 10:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:16.803 10:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:17.062 10:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:17.062 10:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:17.062 10:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:17.320 10:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODQ0MDhmMjIwZThhYWFmYzNmNTA4NjJiOWExMzI1MGU2ZTZhZTUzZjUxYzI4MTA4LDyCxg==: --dhchap-ctrl-secret 
DHHC-1:03:NGQ1MmU3NmY0YWYxMGY5OGNmMDI5ODUyMTA1NzBjYzMyZTk3Y2I2NWUxMmJlMjUzZjgzNmIxYTUyMDg1OWQ0YarPIsE=: 00:12:17.321 10:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --hostid 02f14d39-9b07-4abc-bc4a-e88d43a336ca -l 0 --dhchap-secret DHHC-1:00:ODQ0MDhmMjIwZThhYWFmYzNmNTA4NjJiOWExMzI1MGU2ZTZhZTUzZjUxYzI4MTA4LDyCxg==: --dhchap-ctrl-secret DHHC-1:03:NGQ1MmU3NmY0YWYxMGY5OGNmMDI5ODUyMTA1NzBjYzMyZTk3Y2I2NWUxMmJlMjUzZjgzNmIxYTUyMDg1OWQ0YarPIsE=: 00:12:17.888 10:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:17.888 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:17.888 10:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:12:17.888 10:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.888 10:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.888 10:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.888 10:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:17.888 10:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:17.888 10:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:18.146 10:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:12:18.146 10:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:18.146 10:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:18.146 10:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:18.146 10:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:18.146 10:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:18.146 10:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:18.146 10:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.146 10:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.146 10:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.146 10:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:18.146 10:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:18.146 10:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:18.715 00:12:18.715 10:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:18.715 10:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:18.715 10:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:18.973 10:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:18.973 10:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:18.973 10:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.973 10:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.973 10:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.973 10:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:18.973 { 00:12:18.973 "cntlid": 123, 00:12:18.973 "qid": 0, 00:12:18.973 "state": "enabled", 00:12:18.973 "thread": "nvmf_tgt_poll_group_000", 00:12:18.973 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca", 00:12:18.973 "listen_address": { 00:12:18.973 "trtype": "TCP", 00:12:18.973 "adrfam": "IPv4", 00:12:18.973 "traddr": "10.0.0.3", 00:12:18.973 "trsvcid": "4420" 00:12:18.973 }, 00:12:18.973 "peer_address": { 00:12:18.973 "trtype": "TCP", 00:12:18.973 "adrfam": "IPv4", 00:12:18.973 "traddr": "10.0.0.1", 00:12:18.973 "trsvcid": "52776" 00:12:18.973 }, 00:12:18.973 "auth": { 00:12:18.973 "state": "completed", 00:12:18.973 "digest": "sha512", 00:12:18.973 "dhgroup": "ffdhe4096" 00:12:18.973 } 00:12:18.973 } 00:12:18.973 ]' 00:12:18.973 10:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:18.973 10:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:18.973 10:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:18.973 10:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:18.973 10:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:18.973 10:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:18.973 10:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:18.974 10:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:19.232 10:56:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjM3YTMxZDFiZDhiODNkZjY3NTMwYjM4Mzg0YzQ5NWQJ3Ioy: --dhchap-ctrl-secret DHHC-1:02:NDVhYTVhZjlmY2Y0ZDdlNzY3NmM0M2NhYzhjMWYwMWIyYjNkMDkwNjQxYWJhZTFjDYJZFw==: 00:12:19.232 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --hostid 02f14d39-9b07-4abc-bc4a-e88d43a336ca -l 0 --dhchap-secret DHHC-1:01:NjM3YTMxZDFiZDhiODNkZjY3NTMwYjM4Mzg0YzQ5NWQJ3Ioy: --dhchap-ctrl-secret DHHC-1:02:NDVhYTVhZjlmY2Y0ZDdlNzY3NmM0M2NhYzhjMWYwMWIyYjNkMDkwNjQxYWJhZTFjDYJZFw==: 00:12:20.168 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:20.168 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:20.168 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:12:20.168 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.168 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.168 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.168 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:20.168 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:20.168 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:20.168 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:12:20.168 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:20.168 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:20.168 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:20.168 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:20.168 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:20.168 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:20.168 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.168 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.168 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.168 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:20.168 10:56:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:20.168 10:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:20.427 00:12:20.686 10:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:20.686 10:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:20.686 10:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:20.945 10:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:20.945 10:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:20.945 10:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.945 10:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.945 10:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.945 10:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:20.945 { 00:12:20.945 "cntlid": 125, 00:12:20.945 "qid": 0, 00:12:20.945 "state": "enabled", 00:12:20.945 "thread": "nvmf_tgt_poll_group_000", 00:12:20.945 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca", 00:12:20.945 "listen_address": { 00:12:20.945 "trtype": "TCP", 00:12:20.945 "adrfam": "IPv4", 00:12:20.945 "traddr": "10.0.0.3", 00:12:20.945 "trsvcid": "4420" 00:12:20.945 }, 00:12:20.945 "peer_address": { 00:12:20.945 "trtype": "TCP", 00:12:20.945 "adrfam": "IPv4", 00:12:20.945 "traddr": "10.0.0.1", 00:12:20.945 "trsvcid": "52802" 00:12:20.945 }, 00:12:20.945 "auth": { 00:12:20.945 "state": "completed", 00:12:20.945 "digest": "sha512", 00:12:20.945 "dhgroup": "ffdhe4096" 00:12:20.945 } 00:12:20.945 } 00:12:20.945 ]' 00:12:20.945 10:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:20.945 10:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:20.945 10:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:20.945 10:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:20.945 10:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:20.945 10:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:20.945 10:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:20.945 10:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:21.204 10:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWY2M2U4YzhiNWEyNGYyOGI2MWNiZWE2M2FiZjA5ZGUwMTcxYzlhNWY5MTBjM2NkviDV6Q==: --dhchap-ctrl-secret DHHC-1:01:ZDI4YjU0MDBhZDY2ZDYyM2JiOWVlNDdhOTE2YTFhZmKfhOZS: 00:12:21.204 10:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --hostid 02f14d39-9b07-4abc-bc4a-e88d43a336ca -l 0 --dhchap-secret DHHC-1:02:ZWY2M2U4YzhiNWEyNGYyOGI2MWNiZWE2M2FiZjA5ZGUwMTcxYzlhNWY5MTBjM2NkviDV6Q==: --dhchap-ctrl-secret DHHC-1:01:ZDI4YjU0MDBhZDY2ZDYyM2JiOWVlNDdhOTE2YTFhZmKfhOZS: 00:12:22.142 10:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:22.142 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:22.142 10:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:12:22.142 10:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.142 10:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.142 10:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.142 10:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:22.142 10:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:22.142 10:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:22.142 10:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:12:22.142 10:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:22.142 10:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:22.142 10:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:22.142 10:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:22.142 10:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:22.142 10:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --dhchap-key key3 00:12:22.142 10:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.142 10:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.142 10:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.142 10:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key3 00:12:22.143 10:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:22.143 10:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:22.710 00:12:22.710 10:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:22.710 10:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:22.710 10:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:22.968 10:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:22.969 10:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:22.969 10:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.969 10:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.969 10:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.969 10:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:22.969 { 00:12:22.969 "cntlid": 127, 00:12:22.969 "qid": 0, 00:12:22.969 "state": "enabled", 00:12:22.969 "thread": "nvmf_tgt_poll_group_000", 00:12:22.969 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca", 00:12:22.969 "listen_address": { 00:12:22.969 "trtype": "TCP", 00:12:22.969 "adrfam": "IPv4", 00:12:22.969 "traddr": "10.0.0.3", 00:12:22.969 "trsvcid": "4420" 00:12:22.969 }, 00:12:22.969 "peer_address": { 00:12:22.969 "trtype": "TCP", 00:12:22.969 "adrfam": "IPv4", 00:12:22.969 "traddr": "10.0.0.1", 00:12:22.969 "trsvcid": "56222" 00:12:22.969 }, 00:12:22.969 "auth": { 00:12:22.969 "state": "completed", 00:12:22.969 "digest": "sha512", 00:12:22.969 "dhgroup": "ffdhe4096" 00:12:22.969 } 00:12:22.969 } 00:12:22.969 ]' 00:12:22.969 10:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:22.969 10:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:22.969 10:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:22.969 10:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:22.969 10:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:22.969 10:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:22.969 10:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:22.969 10:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:23.228 10:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzQ5ODMxYmRjYzRmOTMwOTM5MDZiYTJkOWE5ODBmOWI4NjY3ZmFhNjc5MTliNWJmNzg0ZjQ4ZWM4OWJmN2Y5MZ5XXyo=: 00:12:23.228 10:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --hostid 02f14d39-9b07-4abc-bc4a-e88d43a336ca -l 0 --dhchap-secret DHHC-1:03:NzQ5ODMxYmRjYzRmOTMwOTM5MDZiYTJkOWE5ODBmOWI4NjY3ZmFhNjc5MTliNWJmNzg0ZjQ4ZWM4OWJmN2Y5MZ5XXyo=: 00:12:23.795 10:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:23.795 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:23.795 10:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:12:23.795 10:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.795 10:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.795 10:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.795 10:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:23.795 10:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:23.795 10:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:23.795 10:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:24.053 10:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:12:24.053 10:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:24.053 10:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:24.053 10:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:24.053 10:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:24.053 10:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:24.053 10:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:24.053 10:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.053 10:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.311 10:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.311 10:56:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:24.311 10:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:24.311 10:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:24.569 00:12:24.569 10:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:24.569 10:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:24.569 10:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:24.826 10:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:24.826 10:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:24.826 10:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.826 10:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.826 10:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.826 10:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:24.826 { 00:12:24.826 "cntlid": 129, 00:12:24.826 "qid": 0, 00:12:24.826 "state": "enabled", 00:12:24.826 "thread": "nvmf_tgt_poll_group_000", 00:12:24.826 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca", 00:12:24.826 "listen_address": { 00:12:24.826 "trtype": "TCP", 00:12:24.826 "adrfam": "IPv4", 00:12:24.826 "traddr": "10.0.0.3", 00:12:24.826 "trsvcid": "4420" 00:12:24.826 }, 00:12:24.826 "peer_address": { 00:12:24.826 "trtype": "TCP", 00:12:24.826 "adrfam": "IPv4", 00:12:24.826 "traddr": "10.0.0.1", 00:12:24.826 "trsvcid": "56238" 00:12:24.826 }, 00:12:24.826 "auth": { 00:12:24.826 "state": "completed", 00:12:24.826 "digest": "sha512", 00:12:24.826 "dhgroup": "ffdhe6144" 00:12:24.826 } 00:12:24.826 } 00:12:24.826 ]' 00:12:24.826 10:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:25.084 10:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:25.084 10:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:25.084 10:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:25.084 10:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:25.084 10:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:25.084 10:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:25.084 10:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:25.342 10:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODQ0MDhmMjIwZThhYWFmYzNmNTA4NjJiOWExMzI1MGU2ZTZhZTUzZjUxYzI4MTA4LDyCxg==: --dhchap-ctrl-secret DHHC-1:03:NGQ1MmU3NmY0YWYxMGY5OGNmMDI5ODUyMTA1NzBjYzMyZTk3Y2I2NWUxMmJlMjUzZjgzNmIxYTUyMDg1OWQ0YarPIsE=: 00:12:25.343 10:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --hostid 02f14d39-9b07-4abc-bc4a-e88d43a336ca -l 0 --dhchap-secret DHHC-1:00:ODQ0MDhmMjIwZThhYWFmYzNmNTA4NjJiOWExMzI1MGU2ZTZhZTUzZjUxYzI4MTA4LDyCxg==: --dhchap-ctrl-secret DHHC-1:03:NGQ1MmU3NmY0YWYxMGY5OGNmMDI5ODUyMTA1NzBjYzMyZTk3Y2I2NWUxMmJlMjUzZjgzNmIxYTUyMDg1OWQ0YarPIsE=: 00:12:26.280 10:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:26.280 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:26.280 10:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:12:26.280 10:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.280 10:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.280 10:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.281 10:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:26.281 10:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:26.281 10:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:26.281 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:12:26.281 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:26.281 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:26.281 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:26.281 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:26.281 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:26.281 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:26.281 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.281 10:56:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.281 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.281 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:26.281 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:26.281 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:26.849 00:12:26.849 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:26.849 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:26.849 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:27.110 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:27.110 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:27.110 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.110 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.110 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.110 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:27.110 { 00:12:27.110 "cntlid": 131, 00:12:27.110 "qid": 0, 00:12:27.110 "state": "enabled", 00:12:27.110 "thread": "nvmf_tgt_poll_group_000", 00:12:27.110 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca", 00:12:27.110 "listen_address": { 00:12:27.110 "trtype": "TCP", 00:12:27.110 "adrfam": "IPv4", 00:12:27.110 "traddr": "10.0.0.3", 00:12:27.110 "trsvcid": "4420" 00:12:27.110 }, 00:12:27.110 "peer_address": { 00:12:27.110 "trtype": "TCP", 00:12:27.110 "adrfam": "IPv4", 00:12:27.110 "traddr": "10.0.0.1", 00:12:27.110 "trsvcid": "56244" 00:12:27.110 }, 00:12:27.110 "auth": { 00:12:27.110 "state": "completed", 00:12:27.110 "digest": "sha512", 00:12:27.110 "dhgroup": "ffdhe6144" 00:12:27.110 } 00:12:27.110 } 00:12:27.110 ]' 00:12:27.110 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:27.110 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:27.110 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:27.373 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:27.373 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq 
-r '.[0].auth.state' 00:12:27.373 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:27.373 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:27.373 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:27.632 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjM3YTMxZDFiZDhiODNkZjY3NTMwYjM4Mzg0YzQ5NWQJ3Ioy: --dhchap-ctrl-secret DHHC-1:02:NDVhYTVhZjlmY2Y0ZDdlNzY3NmM0M2NhYzhjMWYwMWIyYjNkMDkwNjQxYWJhZTFjDYJZFw==: 00:12:27.632 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --hostid 02f14d39-9b07-4abc-bc4a-e88d43a336ca -l 0 --dhchap-secret DHHC-1:01:NjM3YTMxZDFiZDhiODNkZjY3NTMwYjM4Mzg0YzQ5NWQJ3Ioy: --dhchap-ctrl-secret DHHC-1:02:NDVhYTVhZjlmY2Y0ZDdlNzY3NmM0M2NhYzhjMWYwMWIyYjNkMDkwNjQxYWJhZTFjDYJZFw==: 00:12:28.199 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:28.199 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:28.199 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:12:28.199 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.199 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.199 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.199 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:28.199 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:28.199 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:28.459 10:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:12:28.459 10:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:28.459 10:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:28.459 10:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:28.459 10:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:28.459 10:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:28.459 10:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:28.459 10:56:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.459 10:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.459 10:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.459 10:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:28.459 10:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:28.459 10:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:29.027 00:12:29.027 10:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:29.027 10:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:29.027 10:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:29.286 10:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:29.286 10:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:29.286 10:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.286 10:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.286 10:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.286 10:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:29.286 { 00:12:29.286 "cntlid": 133, 00:12:29.286 "qid": 0, 00:12:29.286 "state": "enabled", 00:12:29.286 "thread": "nvmf_tgt_poll_group_000", 00:12:29.286 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca", 00:12:29.286 "listen_address": { 00:12:29.286 "trtype": "TCP", 00:12:29.286 "adrfam": "IPv4", 00:12:29.286 "traddr": "10.0.0.3", 00:12:29.286 "trsvcid": "4420" 00:12:29.286 }, 00:12:29.286 "peer_address": { 00:12:29.286 "trtype": "TCP", 00:12:29.286 "adrfam": "IPv4", 00:12:29.286 "traddr": "10.0.0.1", 00:12:29.286 "trsvcid": "56282" 00:12:29.286 }, 00:12:29.286 "auth": { 00:12:29.286 "state": "completed", 00:12:29.286 "digest": "sha512", 00:12:29.286 "dhgroup": "ffdhe6144" 00:12:29.286 } 00:12:29.286 } 00:12:29.286 ]' 00:12:29.286 10:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:29.286 10:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:29.286 10:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:29.286 10:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 
== \f\f\d\h\e\6\1\4\4 ]] 00:12:29.286 10:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:29.286 10:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:29.286 10:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:29.286 10:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:29.544 10:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWY2M2U4YzhiNWEyNGYyOGI2MWNiZWE2M2FiZjA5ZGUwMTcxYzlhNWY5MTBjM2NkviDV6Q==: --dhchap-ctrl-secret DHHC-1:01:ZDI4YjU0MDBhZDY2ZDYyM2JiOWVlNDdhOTE2YTFhZmKfhOZS: 00:12:29.544 10:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --hostid 02f14d39-9b07-4abc-bc4a-e88d43a336ca -l 0 --dhchap-secret DHHC-1:02:ZWY2M2U4YzhiNWEyNGYyOGI2MWNiZWE2M2FiZjA5ZGUwMTcxYzlhNWY5MTBjM2NkviDV6Q==: --dhchap-ctrl-secret DHHC-1:01:ZDI4YjU0MDBhZDY2ZDYyM2JiOWVlNDdhOTE2YTFhZmKfhOZS: 00:12:30.111 10:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:30.111 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:30.111 10:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:12:30.111 10:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.111 10:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.369 10:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.370 10:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:30.370 10:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:30.370 10:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:30.628 10:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:12:30.628 10:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:30.628 10:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:30.628 10:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:30.628 10:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:30.628 10:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:30.628 10:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --dhchap-key key3 00:12:30.628 10:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.628 10:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.628 10:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.628 10:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:30.628 10:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:30.628 10:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:30.887 00:12:30.887 10:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:30.887 10:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:31.145 10:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:31.145 10:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:31.145 10:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:31.145 10:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.145 10:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.404 10:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.404 10:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:31.404 { 00:12:31.404 "cntlid": 135, 00:12:31.404 "qid": 0, 00:12:31.404 "state": "enabled", 00:12:31.404 "thread": "nvmf_tgt_poll_group_000", 00:12:31.404 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca", 00:12:31.404 "listen_address": { 00:12:31.404 "trtype": "TCP", 00:12:31.404 "adrfam": "IPv4", 00:12:31.404 "traddr": "10.0.0.3", 00:12:31.404 "trsvcid": "4420" 00:12:31.404 }, 00:12:31.404 "peer_address": { 00:12:31.404 "trtype": "TCP", 00:12:31.404 "adrfam": "IPv4", 00:12:31.404 "traddr": "10.0.0.1", 00:12:31.404 "trsvcid": "56308" 00:12:31.404 }, 00:12:31.404 "auth": { 00:12:31.404 "state": "completed", 00:12:31.404 "digest": "sha512", 00:12:31.404 "dhgroup": "ffdhe6144" 00:12:31.404 } 00:12:31.404 } 00:12:31.404 ]' 00:12:31.404 10:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:31.404 10:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:31.404 10:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:31.404 10:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:31.404 10:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:31.404 10:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:31.404 10:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:31.404 10:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:31.662 10:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzQ5ODMxYmRjYzRmOTMwOTM5MDZiYTJkOWE5ODBmOWI4NjY3ZmFhNjc5MTliNWJmNzg0ZjQ4ZWM4OWJmN2Y5MZ5XXyo=: 00:12:31.662 10:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --hostid 02f14d39-9b07-4abc-bc4a-e88d43a336ca -l 0 --dhchap-secret DHHC-1:03:NzQ5ODMxYmRjYzRmOTMwOTM5MDZiYTJkOWE5ODBmOWI4NjY3ZmFhNjc5MTliNWJmNzg0ZjQ4ZWM4OWJmN2Y5MZ5XXyo=: 00:12:32.597 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:32.597 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:32.597 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:12:32.597 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.597 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.597 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.597 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:32.597 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:32.597 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:32.597 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:32.597 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:12:32.597 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:32.597 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:32.597 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:32.597 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:32.597 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:32.597 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:32.597 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.597 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.878 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.878 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:32.878 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:32.878 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:33.446 00:12:33.446 10:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:33.446 10:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:33.446 10:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:33.446 10:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:33.446 10:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:33.446 10:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.446 10:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.446 10:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.446 10:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:33.446 { 00:12:33.446 "cntlid": 137, 00:12:33.446 "qid": 0, 00:12:33.446 "state": "enabled", 00:12:33.446 "thread": "nvmf_tgt_poll_group_000", 00:12:33.446 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca", 00:12:33.446 "listen_address": { 00:12:33.446 "trtype": "TCP", 00:12:33.446 "adrfam": "IPv4", 00:12:33.446 "traddr": "10.0.0.3", 00:12:33.446 "trsvcid": "4420" 00:12:33.446 }, 00:12:33.446 "peer_address": { 00:12:33.446 "trtype": "TCP", 00:12:33.446 "adrfam": "IPv4", 00:12:33.446 "traddr": "10.0.0.1", 00:12:33.446 "trsvcid": "33246" 00:12:33.446 }, 00:12:33.446 "auth": { 00:12:33.446 "state": "completed", 00:12:33.446 "digest": "sha512", 00:12:33.446 "dhgroup": "ffdhe8192" 00:12:33.446 } 00:12:33.446 } 00:12:33.446 ]' 00:12:33.446 10:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:33.705 10:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:33.705 10:56:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:33.705 10:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:33.705 10:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:33.705 10:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:33.705 10:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:33.705 10:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:33.963 10:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODQ0MDhmMjIwZThhYWFmYzNmNTA4NjJiOWExMzI1MGU2ZTZhZTUzZjUxYzI4MTA4LDyCxg==: --dhchap-ctrl-secret DHHC-1:03:NGQ1MmU3NmY0YWYxMGY5OGNmMDI5ODUyMTA1NzBjYzMyZTk3Y2I2NWUxMmJlMjUzZjgzNmIxYTUyMDg1OWQ0YarPIsE=: 00:12:33.963 10:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --hostid 02f14d39-9b07-4abc-bc4a-e88d43a336ca -l 0 --dhchap-secret DHHC-1:00:ODQ0MDhmMjIwZThhYWFmYzNmNTA4NjJiOWExMzI1MGU2ZTZhZTUzZjUxYzI4MTA4LDyCxg==: --dhchap-ctrl-secret DHHC-1:03:NGQ1MmU3NmY0YWYxMGY5OGNmMDI5ODUyMTA1NzBjYzMyZTk3Y2I2NWUxMmJlMjUzZjgzNmIxYTUyMDg1OWQ0YarPIsE=: 00:12:34.530 10:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:34.530 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:34.530 10:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:12:34.530 10:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.530 10:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.530 10:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.530 10:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:34.530 10:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:34.530 10:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:34.788 10:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:12:34.788 10:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:34.788 10:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:34.788 10:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:34.788 10:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:34.788 10:56:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:34.788 10:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:34.788 10:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.788 10:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.788 10:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.788 10:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:34.788 10:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:34.788 10:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:35.354 00:12:35.354 10:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:35.354 10:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:35.354 10:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:35.920 10:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:35.921 10:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:35.921 10:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.921 10:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.921 10:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.921 10:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:35.921 { 00:12:35.921 "cntlid": 139, 00:12:35.921 "qid": 0, 00:12:35.921 "state": "enabled", 00:12:35.921 "thread": "nvmf_tgt_poll_group_000", 00:12:35.921 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca", 00:12:35.921 "listen_address": { 00:12:35.921 "trtype": "TCP", 00:12:35.921 "adrfam": "IPv4", 00:12:35.921 "traddr": "10.0.0.3", 00:12:35.921 "trsvcid": "4420" 00:12:35.921 }, 00:12:35.921 "peer_address": { 00:12:35.921 "trtype": "TCP", 00:12:35.921 "adrfam": "IPv4", 00:12:35.921 "traddr": "10.0.0.1", 00:12:35.921 "trsvcid": "33274" 00:12:35.921 }, 00:12:35.921 "auth": { 00:12:35.921 "state": "completed", 00:12:35.921 "digest": "sha512", 00:12:35.921 "dhgroup": "ffdhe8192" 00:12:35.921 } 00:12:35.921 } 00:12:35.921 ]' 00:12:35.921 10:56:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:35.921 10:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:35.921 10:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:35.921 10:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:35.921 10:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:35.921 10:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:35.921 10:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:35.921 10:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:36.179 10:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjM3YTMxZDFiZDhiODNkZjY3NTMwYjM4Mzg0YzQ5NWQJ3Ioy: --dhchap-ctrl-secret DHHC-1:02:NDVhYTVhZjlmY2Y0ZDdlNzY3NmM0M2NhYzhjMWYwMWIyYjNkMDkwNjQxYWJhZTFjDYJZFw==: 00:12:36.179 10:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --hostid 02f14d39-9b07-4abc-bc4a-e88d43a336ca -l 0 --dhchap-secret DHHC-1:01:NjM3YTMxZDFiZDhiODNkZjY3NTMwYjM4Mzg0YzQ5NWQJ3Ioy: --dhchap-ctrl-secret DHHC-1:02:NDVhYTVhZjlmY2Y0ZDdlNzY3NmM0M2NhYzhjMWYwMWIyYjNkMDkwNjQxYWJhZTFjDYJZFw==: 00:12:36.788 10:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:36.788 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:36.788 10:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:12:36.788 10:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.788 10:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.788 10:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.788 10:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:36.788 10:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:36.788 10:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:37.047 10:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:12:37.047 10:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:37.047 10:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:37.047 10:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe8192 00:12:37.047 10:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:37.047 10:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:37.047 10:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:37.047 10:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.047 10:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.047 10:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.047 10:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:37.047 10:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:37.047 10:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:37.982 00:12:37.982 10:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:37.982 10:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:37.982 10:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:37.982 10:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:37.982 10:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:37.982 10:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.982 10:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.982 10:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.982 10:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:37.982 { 00:12:37.982 "cntlid": 141, 00:12:37.982 "qid": 0, 00:12:37.982 "state": "enabled", 00:12:37.982 "thread": "nvmf_tgt_poll_group_000", 00:12:37.982 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca", 00:12:37.982 "listen_address": { 00:12:37.982 "trtype": "TCP", 00:12:37.982 "adrfam": "IPv4", 00:12:37.982 "traddr": "10.0.0.3", 00:12:37.982 "trsvcid": "4420" 00:12:37.982 }, 00:12:37.982 "peer_address": { 00:12:37.982 "trtype": "TCP", 00:12:37.982 "adrfam": "IPv4", 00:12:37.982 "traddr": "10.0.0.1", 00:12:37.982 "trsvcid": "33314" 00:12:37.982 }, 00:12:37.982 "auth": { 00:12:37.982 "state": "completed", 00:12:37.982 "digest": 
"sha512", 00:12:37.982 "dhgroup": "ffdhe8192" 00:12:37.982 } 00:12:37.982 } 00:12:37.982 ]' 00:12:37.982 10:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:38.242 10:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:38.242 10:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:38.242 10:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:38.242 10:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:38.242 10:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:38.242 10:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:38.242 10:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:38.500 10:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWY2M2U4YzhiNWEyNGYyOGI2MWNiZWE2M2FiZjA5ZGUwMTcxYzlhNWY5MTBjM2NkviDV6Q==: --dhchap-ctrl-secret DHHC-1:01:ZDI4YjU0MDBhZDY2ZDYyM2JiOWVlNDdhOTE2YTFhZmKfhOZS: 00:12:38.500 10:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --hostid 02f14d39-9b07-4abc-bc4a-e88d43a336ca -l 0 --dhchap-secret DHHC-1:02:ZWY2M2U4YzhiNWEyNGYyOGI2MWNiZWE2M2FiZjA5ZGUwMTcxYzlhNWY5MTBjM2NkviDV6Q==: --dhchap-ctrl-secret DHHC-1:01:ZDI4YjU0MDBhZDY2ZDYyM2JiOWVlNDdhOTE2YTFhZmKfhOZS: 00:12:39.436 10:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:39.436 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:39.436 10:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:12:39.436 10:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.436 10:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.436 10:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.436 10:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:39.436 10:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:39.436 10:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:39.695 10:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:12:39.695 10:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:39.695 10:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # digest=sha512 00:12:39.695 10:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:39.695 10:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:39.695 10:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:39.695 10:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --dhchap-key key3 00:12:39.695 10:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.695 10:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.695 10:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.695 10:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:39.695 10:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:39.695 10:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:40.262 00:12:40.262 10:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:40.262 10:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:40.262 10:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:40.521 10:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:40.521 10:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:40.521 10:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.521 10:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.521 10:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.521 10:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:40.521 { 00:12:40.521 "cntlid": 143, 00:12:40.521 "qid": 0, 00:12:40.521 "state": "enabled", 00:12:40.521 "thread": "nvmf_tgt_poll_group_000", 00:12:40.521 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca", 00:12:40.521 "listen_address": { 00:12:40.521 "trtype": "TCP", 00:12:40.521 "adrfam": "IPv4", 00:12:40.521 "traddr": "10.0.0.3", 00:12:40.521 "trsvcid": "4420" 00:12:40.521 }, 00:12:40.521 "peer_address": { 00:12:40.521 "trtype": "TCP", 00:12:40.521 "adrfam": "IPv4", 00:12:40.521 "traddr": "10.0.0.1", 00:12:40.521 "trsvcid": "33332" 00:12:40.521 }, 00:12:40.521 "auth": { 00:12:40.521 "state": "completed", 00:12:40.521 
"digest": "sha512", 00:12:40.521 "dhgroup": "ffdhe8192" 00:12:40.521 } 00:12:40.521 } 00:12:40.521 ]' 00:12:40.521 10:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:40.521 10:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:40.521 10:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:40.521 10:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:40.521 10:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:40.780 10:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:40.780 10:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:40.780 10:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:40.780 10:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzQ5ODMxYmRjYzRmOTMwOTM5MDZiYTJkOWE5ODBmOWI4NjY3ZmFhNjc5MTliNWJmNzg0ZjQ4ZWM4OWJmN2Y5MZ5XXyo=: 00:12:40.780 10:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --hostid 02f14d39-9b07-4abc-bc4a-e88d43a336ca -l 0 --dhchap-secret DHHC-1:03:NzQ5ODMxYmRjYzRmOTMwOTM5MDZiYTJkOWE5ODBmOWI4NjY3ZmFhNjc5MTliNWJmNzg0ZjQ4ZWM4OWJmN2Y5MZ5XXyo=: 00:12:41.716 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:41.716 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:41.716 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:12:41.716 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.716 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.716 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.716 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:12:41.716 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:12:41.716 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:12:41.716 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:41.716 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:41.716 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups 
null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:41.716 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:12:41.716 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:41.716 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:41.717 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:41.717 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:41.717 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:41.717 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:41.717 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.717 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.975 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.975 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:41.975 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:41.975 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:42.542 00:12:42.542 10:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:42.542 10:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:42.542 10:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:42.801 10:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:42.801 10:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:42.801 10:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.801 10:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.801 10:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.801 10:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:42.801 { 00:12:42.801 "cntlid": 145, 00:12:42.801 "qid": 0, 00:12:42.801 "state": "enabled", 00:12:42.801 "thread": "nvmf_tgt_poll_group_000", 00:12:42.801 
"hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca", 00:12:42.801 "listen_address": { 00:12:42.801 "trtype": "TCP", 00:12:42.801 "adrfam": "IPv4", 00:12:42.801 "traddr": "10.0.0.3", 00:12:42.801 "trsvcid": "4420" 00:12:42.801 }, 00:12:42.801 "peer_address": { 00:12:42.801 "trtype": "TCP", 00:12:42.801 "adrfam": "IPv4", 00:12:42.801 "traddr": "10.0.0.1", 00:12:42.801 "trsvcid": "49948" 00:12:42.801 }, 00:12:42.801 "auth": { 00:12:42.801 "state": "completed", 00:12:42.801 "digest": "sha512", 00:12:42.801 "dhgroup": "ffdhe8192" 00:12:42.801 } 00:12:42.801 } 00:12:42.801 ]' 00:12:42.801 10:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:42.801 10:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:42.801 10:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:42.801 10:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:42.801 10:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:42.801 10:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:42.801 10:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:42.801 10:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:43.368 10:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODQ0MDhmMjIwZThhYWFmYzNmNTA4NjJiOWExMzI1MGU2ZTZhZTUzZjUxYzI4MTA4LDyCxg==: --dhchap-ctrl-secret DHHC-1:03:NGQ1MmU3NmY0YWYxMGY5OGNmMDI5ODUyMTA1NzBjYzMyZTk3Y2I2NWUxMmJlMjUzZjgzNmIxYTUyMDg1OWQ0YarPIsE=: 00:12:43.368 10:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --hostid 02f14d39-9b07-4abc-bc4a-e88d43a336ca -l 0 --dhchap-secret DHHC-1:00:ODQ0MDhmMjIwZThhYWFmYzNmNTA4NjJiOWExMzI1MGU2ZTZhZTUzZjUxYzI4MTA4LDyCxg==: --dhchap-ctrl-secret DHHC-1:03:NGQ1MmU3NmY0YWYxMGY5OGNmMDI5ODUyMTA1NzBjYzMyZTk3Y2I2NWUxMmJlMjUzZjgzNmIxYTUyMDg1OWQ0YarPIsE=: 00:12:43.936 10:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:43.936 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:43.936 10:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:12:43.936 10:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.936 10:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.936 10:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.937 10:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --dhchap-key key1 00:12:43.937 10:56:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.937 10:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.937 10:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.937 10:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:12:43.937 10:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:12:43.937 10:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:12:43.937 10:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:12:43.937 10:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:43.937 10:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:12:43.937 10:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:43.937 10:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:12:43.937 10:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:12:43.937 10:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:12:44.505 request: 00:12:44.505 { 00:12:44.505 "name": "nvme0", 00:12:44.505 "trtype": "tcp", 00:12:44.505 "traddr": "10.0.0.3", 00:12:44.505 "adrfam": "ipv4", 00:12:44.505 "trsvcid": "4420", 00:12:44.505 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:44.505 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca", 00:12:44.505 "prchk_reftag": false, 00:12:44.505 "prchk_guard": false, 00:12:44.505 "hdgst": false, 00:12:44.505 "ddgst": false, 00:12:44.505 "dhchap_key": "key2", 00:12:44.505 "allow_unrecognized_csi": false, 00:12:44.505 "method": "bdev_nvme_attach_controller", 00:12:44.505 "req_id": 1 00:12:44.505 } 00:12:44.505 Got JSON-RPC error response 00:12:44.505 response: 00:12:44.505 { 00:12:44.505 "code": -5, 00:12:44.505 "message": "Input/output error" 00:12:44.505 } 00:12:44.505 10:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:12:44.505 10:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:44.505 10:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:44.505 10:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:44.505 10:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:12:44.505 
10:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.505 10:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.505 10:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.505 10:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:44.505 10:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.505 10:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.505 10:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.505 10:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:44.505 10:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:12:44.505 10:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:44.505 10:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:12:44.505 10:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:44.505 10:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:12:44.505 10:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:44.506 10:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:44.506 10:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:44.506 10:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:45.074 request: 00:12:45.074 { 00:12:45.074 "name": "nvme0", 00:12:45.074 "trtype": "tcp", 00:12:45.074 "traddr": "10.0.0.3", 00:12:45.074 "adrfam": "ipv4", 00:12:45.074 "trsvcid": "4420", 00:12:45.074 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:45.074 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca", 00:12:45.074 "prchk_reftag": false, 00:12:45.074 "prchk_guard": false, 00:12:45.074 "hdgst": false, 00:12:45.074 "ddgst": false, 00:12:45.074 "dhchap_key": "key1", 00:12:45.074 "dhchap_ctrlr_key": "ckey2", 00:12:45.074 "allow_unrecognized_csi": false, 00:12:45.074 "method": "bdev_nvme_attach_controller", 00:12:45.074 "req_id": 1 00:12:45.074 } 00:12:45.074 Got JSON-RPC error response 00:12:45.074 response: 00:12:45.074 { 
00:12:45.074 "code": -5, 00:12:45.074 "message": "Input/output error" 00:12:45.074 } 00:12:45.074 10:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:12:45.074 10:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:45.074 10:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:45.074 10:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:45.074 10:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:12:45.074 10:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.074 10:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.074 10:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.074 10:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --dhchap-key key1 00:12:45.074 10:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.074 10:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.075 10:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.075 10:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:45.075 10:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:12:45.075 10:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:45.075 10:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:12:45.075 10:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:45.075 10:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:12:45.075 10:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:45.075 10:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:45.075 10:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:45.075 10:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:45.642 
request: 00:12:45.642 { 00:12:45.642 "name": "nvme0", 00:12:45.642 "trtype": "tcp", 00:12:45.642 "traddr": "10.0.0.3", 00:12:45.642 "adrfam": "ipv4", 00:12:45.642 "trsvcid": "4420", 00:12:45.642 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:45.642 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca", 00:12:45.642 "prchk_reftag": false, 00:12:45.642 "prchk_guard": false, 00:12:45.642 "hdgst": false, 00:12:45.642 "ddgst": false, 00:12:45.642 "dhchap_key": "key1", 00:12:45.642 "dhchap_ctrlr_key": "ckey1", 00:12:45.642 "allow_unrecognized_csi": false, 00:12:45.642 "method": "bdev_nvme_attach_controller", 00:12:45.642 "req_id": 1 00:12:45.642 } 00:12:45.642 Got JSON-RPC error response 00:12:45.642 response: 00:12:45.642 { 00:12:45.642 "code": -5, 00:12:45.642 "message": "Input/output error" 00:12:45.642 } 00:12:45.642 10:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:12:45.642 10:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:45.642 10:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:45.642 10:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:45.642 10:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:12:45.642 10:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.642 10:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.642 10:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.642 10:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 67142 00:12:45.642 10:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 67142 ']' 00:12:45.642 10:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 67142 00:12:45.642 10:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:12:45.642 10:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:45.642 10:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67142 00:12:45.901 killing process with pid 67142 00:12:45.901 10:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:45.901 10:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:45.901 10:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67142' 00:12:45.901 10:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 67142 00:12:45.901 10:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 67142 00:12:45.901 10:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:12:45.901 10:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:45.901 10:56:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:45.901 10:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.901 10:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=70145 00:12:45.901 10:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:12:45.901 10:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 70145 00:12:45.901 10:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 70145 ']' 00:12:45.901 10:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:45.901 10:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:45.901 10:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:45.901 10:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:45.901 10:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.160 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:46.160 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:12:46.160 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:46.160 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:46.160 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.419 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:46.419 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:12:46.419 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 70145 00:12:46.419 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 70145 ']' 00:12:46.419 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:46.419 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:46.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:46.419 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:12:46.419 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:46.419 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.679 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:46.679 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:12:46.679 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:12:46.679 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.679 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.679 null0 00:12:46.679 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.679 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:12:46.679 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.DZc 00:12:46.679 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.679 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.679 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.679 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.I3Q ]] 00:12:46.679 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.I3Q 00:12:46.679 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.679 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.679 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.679 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:12:46.679 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.ovN 00:12:46.679 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.679 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.679 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.679 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.ENN ]] 00:12:46.679 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.ENN 00:12:46.679 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.679 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.679 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.679 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:12:46.679 10:56:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.xdp 00:12:46.679 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.679 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.938 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.938 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.Q6x ]] 00:12:46.938 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Q6x 00:12:46.938 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.938 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.938 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.938 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:12:46.938 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.bO6 00:12:46.938 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.938 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.938 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.938 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:12:46.938 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:12:46.938 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:46.938 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:46.938 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:46.938 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:46.938 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:46.938 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --dhchap-key key3 00:12:46.938 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.938 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.938 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.938 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:46.938 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
00:12:46.939 10:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:47.876 nvme0n1 00:12:47.876 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:47.876 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:47.876 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:48.134 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:48.134 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:48.134 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.134 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.134 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.134 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:48.134 { 00:12:48.134 "cntlid": 1, 00:12:48.134 "qid": 0, 00:12:48.134 "state": "enabled", 00:12:48.134 "thread": "nvmf_tgt_poll_group_000", 00:12:48.134 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca", 00:12:48.134 "listen_address": { 00:12:48.134 "trtype": "TCP", 00:12:48.134 "adrfam": "IPv4", 00:12:48.134 "traddr": "10.0.0.3", 00:12:48.134 "trsvcid": "4420" 00:12:48.134 }, 00:12:48.134 "peer_address": { 00:12:48.134 "trtype": "TCP", 00:12:48.134 "adrfam": "IPv4", 00:12:48.135 "traddr": "10.0.0.1", 00:12:48.135 "trsvcid": "49990" 00:12:48.135 }, 00:12:48.135 "auth": { 00:12:48.135 "state": "completed", 00:12:48.135 "digest": "sha512", 00:12:48.135 "dhgroup": "ffdhe8192" 00:12:48.135 } 00:12:48.135 } 00:12:48.135 ]' 00:12:48.135 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:48.135 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:48.135 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:48.135 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:48.135 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:48.135 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:48.135 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:48.135 10:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:48.394 10:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NzQ5ODMxYmRjYzRmOTMwOTM5MDZiYTJkOWE5ODBmOWI4NjY3ZmFhNjc5MTliNWJmNzg0ZjQ4ZWM4OWJmN2Y5MZ5XXyo=: 00:12:48.394 10:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --hostid 02f14d39-9b07-4abc-bc4a-e88d43a336ca -l 0 --dhchap-secret DHHC-1:03:NzQ5ODMxYmRjYzRmOTMwOTM5MDZiYTJkOWE5ODBmOWI4NjY3ZmFhNjc5MTliNWJmNzg0ZjQ4ZWM4OWJmN2Y5MZ5XXyo=: 00:12:48.959 10:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:49.217 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:49.217 10:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:12:49.217 10:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.217 10:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.217 10:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.217 10:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --dhchap-key key3 00:12:49.217 10:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.217 10:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.217 10:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.217 10:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:12:49.217 10:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:12:49.476 10:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:12:49.476 10:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:12:49.476 10:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:12:49.476 10:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:12:49.476 10:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:49.476 10:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:12:49.476 10:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:49.476 10:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:49.476 10:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:49.476 10:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:49.734 request: 00:12:49.734 { 00:12:49.734 "name": "nvme0", 00:12:49.734 "trtype": "tcp", 00:12:49.734 "traddr": "10.0.0.3", 00:12:49.734 "adrfam": "ipv4", 00:12:49.734 "trsvcid": "4420", 00:12:49.734 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:49.734 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca", 00:12:49.734 "prchk_reftag": false, 00:12:49.734 "prchk_guard": false, 00:12:49.734 "hdgst": false, 00:12:49.734 "ddgst": false, 00:12:49.734 "dhchap_key": "key3", 00:12:49.734 "allow_unrecognized_csi": false, 00:12:49.734 "method": "bdev_nvme_attach_controller", 00:12:49.734 "req_id": 1 00:12:49.734 } 00:12:49.734 Got JSON-RPC error response 00:12:49.734 response: 00:12:49.734 { 00:12:49.734 "code": -5, 00:12:49.734 "message": "Input/output error" 00:12:49.734 } 00:12:49.734 10:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:12:49.734 10:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:49.734 10:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:49.734 10:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:49.734 10:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:12:49.734 10:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:12:49.734 10:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:12:49.734 10:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:12:49.994 10:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:12:49.995 10:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:12:49.995 10:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:12:49.995 10:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:12:49.995 10:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:49.995 10:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:12:49.995 10:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:49.995 10:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:49.995 10:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:49.995 10:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:50.266 request: 00:12:50.266 { 00:12:50.266 "name": "nvme0", 00:12:50.266 "trtype": "tcp", 00:12:50.267 "traddr": "10.0.0.3", 00:12:50.267 "adrfam": "ipv4", 00:12:50.267 "trsvcid": "4420", 00:12:50.267 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:50.267 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca", 00:12:50.267 "prchk_reftag": false, 00:12:50.267 "prchk_guard": false, 00:12:50.267 "hdgst": false, 00:12:50.267 "ddgst": false, 00:12:50.267 "dhchap_key": "key3", 00:12:50.267 "allow_unrecognized_csi": false, 00:12:50.267 "method": "bdev_nvme_attach_controller", 00:12:50.267 "req_id": 1 00:12:50.267 } 00:12:50.267 Got JSON-RPC error response 00:12:50.267 response: 00:12:50.267 { 00:12:50.267 "code": -5, 00:12:50.267 "message": "Input/output error" 00:12:50.267 } 00:12:50.267 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:12:50.267 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:50.267 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:50.267 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:50.267 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:12:50.267 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:12:50.267 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:12:50.267 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:50.267 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:50.267 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:50.526 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:12:50.526 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.526 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.526 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.526 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:12:50.526 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.527 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.527 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.527 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:50.527 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:12:50.527 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:50.527 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:12:50.527 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:50.527 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:12:50.527 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:50.527 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:50.527 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:50.527 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:51.094 request: 00:12:51.094 { 00:12:51.094 "name": "nvme0", 00:12:51.094 "trtype": "tcp", 00:12:51.094 "traddr": "10.0.0.3", 00:12:51.094 "adrfam": "ipv4", 00:12:51.094 "trsvcid": "4420", 00:12:51.094 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:51.094 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca", 00:12:51.094 "prchk_reftag": false, 00:12:51.094 "prchk_guard": false, 00:12:51.094 "hdgst": false, 00:12:51.094 "ddgst": false, 00:12:51.094 "dhchap_key": "key0", 00:12:51.094 "dhchap_ctrlr_key": "key1", 00:12:51.094 "allow_unrecognized_csi": false, 00:12:51.094 "method": "bdev_nvme_attach_controller", 00:12:51.094 "req_id": 1 00:12:51.094 } 00:12:51.094 Got JSON-RPC error response 00:12:51.094 response: 00:12:51.094 { 00:12:51.094 "code": -5, 00:12:51.094 "message": "Input/output error" 00:12:51.094 } 00:12:51.094 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:12:51.094 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:51.094 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:51.094 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:12:51.094 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:12:51.094 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:12:51.094 10:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:12:51.352 nvme0n1 00:12:51.352 10:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:12:51.352 10:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:12:51.352 10:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:51.610 10:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:51.610 10:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:51.610 10:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:51.869 10:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --dhchap-key key1 00:12:51.869 10:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.869 10:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.869 10:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.869 10:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:12:51.869 10:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:12:51.869 10:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:12:52.805 nvme0n1 00:12:52.805 10:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:12:52.805 10:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:12:52.805 10:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:53.064 10:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:53.064 10:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:53.064 10:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.064 10:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.064 10:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.064 10:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:12:53.064 10:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:53.064 10:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:12:53.323 10:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:53.323 10:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWY2M2U4YzhiNWEyNGYyOGI2MWNiZWE2M2FiZjA5ZGUwMTcxYzlhNWY5MTBjM2NkviDV6Q==: --dhchap-ctrl-secret DHHC-1:03:NzQ5ODMxYmRjYzRmOTMwOTM5MDZiYTJkOWE5ODBmOWI4NjY3ZmFhNjc5MTliNWJmNzg0ZjQ4ZWM4OWJmN2Y5MZ5XXyo=: 00:12:53.323 10:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --hostid 02f14d39-9b07-4abc-bc4a-e88d43a336ca -l 0 --dhchap-secret DHHC-1:02:ZWY2M2U4YzhiNWEyNGYyOGI2MWNiZWE2M2FiZjA5ZGUwMTcxYzlhNWY5MTBjM2NkviDV6Q==: --dhchap-ctrl-secret DHHC-1:03:NzQ5ODMxYmRjYzRmOTMwOTM5MDZiYTJkOWE5ODBmOWI4NjY3ZmFhNjc5MTliNWJmNzg0ZjQ4ZWM4OWJmN2Y5MZ5XXyo=: 00:12:53.890 10:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:12:53.890 10:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:12:53.890 10:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:12:53.890 10:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:12:53.890 10:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:12:53.890 10:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:12:53.890 10:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:12:53.890 10:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:53.890 10:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:54.148 10:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:12:54.148 10:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:12:54.148 10:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:12:54.148 10:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:12:54.148 10:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:54.148 10:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:12:54.148 10:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:54.148 10:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:12:54.148 10:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:12:54.148 10:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:12:54.716 request: 00:12:54.716 { 00:12:54.716 "name": "nvme0", 00:12:54.716 "trtype": "tcp", 00:12:54.716 "traddr": "10.0.0.3", 00:12:54.716 "adrfam": "ipv4", 00:12:54.716 "trsvcid": "4420", 00:12:54.716 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:54.716 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca", 00:12:54.716 "prchk_reftag": false, 00:12:54.716 "prchk_guard": false, 00:12:54.716 "hdgst": false, 00:12:54.716 "ddgst": false, 00:12:54.716 "dhchap_key": "key1", 00:12:54.716 "allow_unrecognized_csi": false, 00:12:54.716 "method": "bdev_nvme_attach_controller", 00:12:54.716 "req_id": 1 00:12:54.716 } 00:12:54.716 Got JSON-RPC error response 00:12:54.716 response: 00:12:54.716 { 00:12:54.716 "code": -5, 00:12:54.716 "message": "Input/output error" 00:12:54.716 } 00:12:54.716 10:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:12:54.716 10:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:54.716 10:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:54.716 10:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:54.716 10:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:54.716 10:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:54.716 10:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:55.655 nvme0n1 00:12:55.655 
10:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:12:55.655 10:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:12:55.655 10:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:55.914 10:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:55.914 10:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:55.914 10:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:56.172 10:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:12:56.172 10:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.172 10:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:56.172 10:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.172 10:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:12:56.172 10:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:12:56.172 10:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:12:56.431 nvme0n1 00:12:56.431 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:12:56.431 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:12:56.431 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:56.689 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:56.689 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:56.689 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:56.948 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --dhchap-key key1 --dhchap-ctrlr-key key3 00:12:56.948 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.948 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:56.948 10:56:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.948 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:NjM3YTMxZDFiZDhiODNkZjY3NTMwYjM4Mzg0YzQ5NWQJ3Ioy: '' 2s 00:12:56.948 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:12:56.948 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:12:56.948 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:NjM3YTMxZDFiZDhiODNkZjY3NTMwYjM4Mzg0YzQ5NWQJ3Ioy: 00:12:56.948 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:12:56.948 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:12:56.948 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:12:56.948 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:NjM3YTMxZDFiZDhiODNkZjY3NTMwYjM4Mzg0YzQ5NWQJ3Ioy: ]] 00:12:56.948 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:NjM3YTMxZDFiZDhiODNkZjY3NTMwYjM4Mzg0YzQ5NWQJ3Ioy: 00:12:56.948 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:12:56.948 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:12:56.948 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:12:59.480 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:12:59.480 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:12:59.480 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:12:59.480 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:12:59.480 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:12:59.480 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:12:59.480 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:12:59.480 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --dhchap-key key1 --dhchap-ctrlr-key key2 00:12:59.480 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.480 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.480 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.480 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:ZWY2M2U4YzhiNWEyNGYyOGI2MWNiZWE2M2FiZjA5ZGUwMTcxYzlhNWY5MTBjM2NkviDV6Q==: 2s 00:12:59.480 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:12:59.480 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:12:59.480 10:56:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:12:59.480 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:ZWY2M2U4YzhiNWEyNGYyOGI2MWNiZWE2M2FiZjA5ZGUwMTcxYzlhNWY5MTBjM2NkviDV6Q==: 00:12:59.480 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:12:59.480 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:12:59.480 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:12:59.480 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:ZWY2M2U4YzhiNWEyNGYyOGI2MWNiZWE2M2FiZjA5ZGUwMTcxYzlhNWY5MTBjM2NkviDV6Q==: ]] 00:12:59.480 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:ZWY2M2U4YzhiNWEyNGYyOGI2MWNiZWE2M2FiZjA5ZGUwMTcxYzlhNWY5MTBjM2NkviDV6Q==: 00:12:59.480 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:12:59.480 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:13:01.386 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:13:01.386 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:13:01.386 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:13:01.386 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:13:01.386 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:13:01.386 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:13:01.386 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:13:01.386 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:01.386 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:01.386 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:01.386 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.386 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.386 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.386 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:13:01.386 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:13:01.386 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:13:01.955 nvme0n1 00:13:01.956 10:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:01.956 10:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.956 10:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.956 10:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.956 10:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:01.956 10:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:02.893 10:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:13:02.893 10:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:02.893 10:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:13:02.893 10:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:02.893 10:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:13:02.893 10:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.893 10:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:02.893 10:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.893 10:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:13:02.893 10:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:13:03.461 10:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:13:03.461 10:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:13:03.461 10:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:03.461 10:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:03.461 10:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:03.461 10:56:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.461 10:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:03.461 10:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.461 10:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:13:03.461 10:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:13:03.461 10:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:13:03.461 10:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:13:03.462 10:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:03.462 10:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:13:03.462 10:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:03.462 10:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:13:03.462 10:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:13:04.035 request: 00:13:04.035 { 00:13:04.035 "name": "nvme0", 00:13:04.035 "dhchap_key": "key1", 00:13:04.035 "dhchap_ctrlr_key": "key3", 00:13:04.035 "method": "bdev_nvme_set_keys", 00:13:04.035 "req_id": 1 00:13:04.035 } 00:13:04.035 Got JSON-RPC error response 00:13:04.035 response: 00:13:04.035 { 00:13:04.035 "code": -13, 00:13:04.035 "message": "Permission denied" 00:13:04.035 } 00:13:04.035 10:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:13:04.035 10:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:04.035 10:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:04.035 10:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:04.035 10:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:13:04.035 10:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:13:04.035 10:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:04.365 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:13:04.365 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:13:05.320 10:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:13:05.320 10:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:05.320 10:56:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:13:05.579 10:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:13:05.579 10:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:05.579 10:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.579 10:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:05.579 10:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.579 10:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:13:05.579 10:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:13:05.579 10:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:13:06.518 nvme0n1 00:13:06.518 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:06.518 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.518 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.518 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.518 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:13:06.518 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:13:06.518 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:13:06.519 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:13:06.519 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:06.519 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:13:06.519 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:06.519 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 
--dhchap-key key2 --dhchap-ctrlr-key key0 00:13:06.519 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:13:07.086 request: 00:13:07.086 { 00:13:07.086 "name": "nvme0", 00:13:07.086 "dhchap_key": "key2", 00:13:07.086 "dhchap_ctrlr_key": "key0", 00:13:07.086 "method": "bdev_nvme_set_keys", 00:13:07.086 "req_id": 1 00:13:07.086 } 00:13:07.086 Got JSON-RPC error response 00:13:07.086 response: 00:13:07.086 { 00:13:07.086 "code": -13, 00:13:07.086 "message": "Permission denied" 00:13:07.086 } 00:13:07.086 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:13:07.086 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:07.086 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:07.086 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:07.086 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:13:07.086 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:07.086 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:13:07.653 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:13:07.653 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:13:08.590 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:13:08.590 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:13:08.590 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:08.849 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:13:08.849 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:13:08.849 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:13:08.849 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 67166 00:13:08.849 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 67166 ']' 00:13:08.849 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 67166 00:13:08.849 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:13:08.849 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:08.849 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67166 00:13:08.849 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:08.849 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:08.849 killing process with pid 67166 00:13:08.849 10:56:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67166' 00:13:08.849 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 67166 00:13:08.849 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 67166 00:13:09.417 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:13:09.417 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:09.417 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:13:09.417 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:09.417 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:13:09.417 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:09.417 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:09.417 rmmod nvme_tcp 00:13:09.417 rmmod nvme_fabrics 00:13:09.417 rmmod nvme_keyring 00:13:09.417 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:09.417 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:13:09.417 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:13:09.417 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 70145 ']' 00:13:09.417 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 70145 00:13:09.417 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 70145 ']' 00:13:09.417 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 70145 00:13:09.417 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:13:09.417 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:09.417 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70145 00:13:09.417 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:09.417 killing process with pid 70145 00:13:09.417 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:09.417 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70145' 00:13:09.417 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 70145 00:13:09.417 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 70145 00:13:09.676 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:09.676 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:09.676 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:09.676 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:13:09.676 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
00:13:09.676 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:13:09.676 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:13:09.676 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:09.676 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:13:09.676 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:13:09.676 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:13:09.676 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:13:09.676 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:13:09.676 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:13:09.676 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:13:09.676 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:13:09.676 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:13:09.676 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:13:09.935 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:13:09.935 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:13:09.935 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:09.935 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:09.935 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:13:09.935 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:09.935 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:09.935 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:09.935 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@300 -- # return 0 00:13:09.935 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.DZc /tmp/spdk.key-sha256.ovN /tmp/spdk.key-sha384.xdp /tmp/spdk.key-sha512.bO6 /tmp/spdk.key-sha512.I3Q /tmp/spdk.key-sha384.ENN /tmp/spdk.key-sha256.Q6x '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:13:09.935 00:13:09.935 real 3m3.000s 00:13:09.935 user 7m18.554s 00:13:09.935 sys 0m28.499s 00:13:09.935 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:09.935 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.935 ************************************ 00:13:09.935 END TEST nvmf_auth_target 
00:13:09.935 ************************************ 00:13:09.935 10:56:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:13:09.935 10:56:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:13:09.935 10:56:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:09.935 10:56:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:09.935 10:56:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:09.935 ************************************ 00:13:09.935 START TEST nvmf_bdevio_no_huge 00:13:09.935 ************************************ 00:13:09.935 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:13:10.195 * Looking for test storage... 00:13:10.195 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:10.195 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:10.195 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:13:10.195 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:10.195 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:10.195 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:10.195 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:10.195 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:10.195 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:13:10.195 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:13:10.195 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:13:10.195 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:13:10.195 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:13:10.195 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:13:10.195 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:13:10.195 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:10.195 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:13:10.195 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:13:10.195 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:10.195 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:10.195 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:13:10.195 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:13:10.195 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:10.195 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:13:10.196 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:13:10.196 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:13:10.196 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:13:10.196 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:10.196 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:13:10.196 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:13:10.196 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:10.196 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:10.196 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:13:10.196 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:10.196 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:10.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:10.196 --rc genhtml_branch_coverage=1 00:13:10.196 --rc genhtml_function_coverage=1 00:13:10.196 --rc genhtml_legend=1 00:13:10.196 --rc geninfo_all_blocks=1 00:13:10.196 --rc geninfo_unexecuted_blocks=1 00:13:10.196 00:13:10.196 ' 00:13:10.196 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:10.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:10.196 --rc genhtml_branch_coverage=1 00:13:10.196 --rc genhtml_function_coverage=1 00:13:10.196 --rc genhtml_legend=1 00:13:10.196 --rc geninfo_all_blocks=1 00:13:10.196 --rc geninfo_unexecuted_blocks=1 00:13:10.196 00:13:10.196 ' 00:13:10.196 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:10.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:10.196 --rc genhtml_branch_coverage=1 00:13:10.196 --rc genhtml_function_coverage=1 00:13:10.196 --rc genhtml_legend=1 00:13:10.196 --rc geninfo_all_blocks=1 00:13:10.196 --rc geninfo_unexecuted_blocks=1 00:13:10.196 00:13:10.196 ' 00:13:10.196 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:10.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:10.196 --rc genhtml_branch_coverage=1 00:13:10.196 --rc genhtml_function_coverage=1 00:13:10.196 --rc genhtml_legend=1 00:13:10.196 --rc geninfo_all_blocks=1 00:13:10.196 --rc geninfo_unexecuted_blocks=1 00:13:10.196 00:13:10.196 ' 00:13:10.196 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:10.196 
10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:13:10.196 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:10.196 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:10.196 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:10.196 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:10.196 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:10.196 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:10.196 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:10.196 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:10.196 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:10.196 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:10.196 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:13:10.196 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:13:10.196 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:10.196 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:10.196 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:10.196 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:10.196 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:10.196 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:13:10.196 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:10.196 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:10.196 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:10.196 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.196 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.196 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.196 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:13:10.196 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.196 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:13:10.196 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:10.196 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:10.196 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:10.196 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:10.196 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:10.196 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:10.196 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:10.196 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:10.196 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:10.196 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:10.196 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:10.196 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:10.196 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:13:10.196 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:10.196 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:10.196 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:10.196 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:10.196 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:10.196 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:10.196 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:10.196 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:10.196 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:13:10.196 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:13:10.196 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:13:10.196 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:13:10.196 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:13:10.196 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@460 -- # nvmf_veth_init 00:13:10.196 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:10.196 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:10.196 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:10.196 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:10.196 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:10.196 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:13:10.196 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:10.197 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:10.197 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:10.197 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:10.197 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:10.197 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:10.197 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:10.197 
10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:10.197 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:10.197 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:10.197 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:10.197 Cannot find device "nvmf_init_br" 00:13:10.197 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:13:10.197 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:10.197 Cannot find device "nvmf_init_br2" 00:13:10.197 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:13:10.197 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:13:10.197 Cannot find device "nvmf_tgt_br" 00:13:10.197 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # true 00:13:10.197 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:13:10.197 Cannot find device "nvmf_tgt_br2" 00:13:10.197 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # true 00:13:10.197 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:10.197 Cannot find device "nvmf_init_br" 00:13:10.197 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # true 00:13:10.197 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:10.197 Cannot find device "nvmf_init_br2" 00:13:10.197 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # true 00:13:10.197 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:10.197 Cannot find device "nvmf_tgt_br" 00:13:10.197 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # true 00:13:10.197 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:10.456 Cannot find device "nvmf_tgt_br2" 00:13:10.456 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # true 00:13:10.456 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:10.456 Cannot find device "nvmf_br" 00:13:10.456 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # true 00:13:10.456 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:10.456 Cannot find device "nvmf_init_if" 00:13:10.456 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # true 00:13:10.456 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:10.456 Cannot find device "nvmf_init_if2" 00:13:10.456 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # true 00:13:10.456 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete 
nvmf_tgt_if 00:13:10.456 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:10.456 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # true 00:13:10.456 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:10.456 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:10.456 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # true 00:13:10.456 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:10.456 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:10.456 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:13:10.456 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:10.456 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:10.456 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:10.456 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:10.456 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:10.456 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:10.456 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:10.456 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:10.456 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:10.456 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:10.456 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:10.456 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:10.456 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:10.456 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:10.456 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:10.456 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:10.456 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:10.456 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:10.456 10:56:57 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:10.456 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:10.456 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:13:10.456 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:10.456 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:10.456 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:10.456 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:10.714 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:10.714 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:10.714 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:10.714 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:10.715 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:10.715 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:10.715 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.075 ms 00:13:10.715 00:13:10.715 --- 10.0.0.3 ping statistics --- 00:13:10.715 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:10.715 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:13:10.715 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:10.715 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:13:10.715 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.057 ms 00:13:10.715 00:13:10.715 --- 10.0.0.4 ping statistics --- 00:13:10.715 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:10.715 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:13:10.715 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:10.715 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:10.715 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 00:13:10.715 00:13:10.715 --- 10.0.0.1 ping statistics --- 00:13:10.715 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:10.715 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:13:10.715 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:10.715 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:10.715 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms 00:13:10.715 00:13:10.715 --- 10.0.0.2 ping statistics --- 00:13:10.715 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:10.715 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:13:10.715 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:10.715 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@461 -- # return 0 00:13:10.715 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:10.715 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:10.715 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:10.715 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:10.715 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:10.715 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:10.715 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:10.715 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:13:10.715 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:10.715 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:10.715 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:10.715 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=70776 00:13:10.715 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:13:10.715 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 70776 00:13:10.715 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 70776 ']' 00:13:10.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:10.715 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:10.715 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:10.715 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:10.715 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:10.715 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:10.715 [2024-11-15 10:56:57.425871] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
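Note: the four pings above validate the veth/bridge topology that nvmf_veth_init assembled a few entries earlier, and nvmfappstart has just launched build/bin/nvmf_tgt inside the nvmf_tgt_ns_spdk namespace with --no-huge -s 1024 -m 0x78 (its DPDK/EAL startup banner continues below). A condensed sketch of that topology, limited to two of the four veth pairs and using only names and addresses visible in the trace:

  # initiator-side interface stays in the root namespace; the target-side
  # interface is moved into nvmf_tgt_ns_spdk and both are joined by a bridge
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up && ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  # after this, "ping -c 1 10.0.0.3" from the root namespace and
  # "ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1" both succeed,
  # matching the replies logged above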
00:13:10.715 [2024-11-15 10:56:57.425961] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:13:10.974 [2024-11-15 10:56:57.591471] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:10.974 [2024-11-15 10:56:57.672197] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:10.974 [2024-11-15 10:56:57.672775] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:10.974 [2024-11-15 10:56:57.673290] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:10.974 [2024-11-15 10:56:57.673707] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:10.974 [2024-11-15 10:56:57.674030] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:10.974 [2024-11-15 10:56:57.675031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:13:10.974 [2024-11-15 10:56:57.675121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:13:10.974 [2024-11-15 10:56:57.675253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:10.974 [2024-11-15 10:56:57.675242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:13:10.974 [2024-11-15 10:56:57.682124] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:11.910 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:11.910 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:13:11.910 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:11.910 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:11.910 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:11.910 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:11.910 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:11.910 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.910 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:11.910 [2024-11-15 10:56:58.514985] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:11.910 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.910 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:11.910 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.910 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:11.910 Malloc0 00:13:11.910 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.910 10:56:58 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:11.910 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.910 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:11.910 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.910 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:11.910 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.910 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:11.910 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.910 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:13:11.910 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.910 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:11.910 [2024-11-15 10:56:58.555095] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:11.911 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.911 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:13:11.911 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:13:11.911 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:13:11.911 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:13:11.911 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:13:11.911 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:13:11.911 { 00:13:11.911 "params": { 00:13:11.911 "name": "Nvme$subsystem", 00:13:11.911 "trtype": "$TEST_TRANSPORT", 00:13:11.911 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:11.911 "adrfam": "ipv4", 00:13:11.911 "trsvcid": "$NVMF_PORT", 00:13:11.911 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:11.911 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:11.911 "hdgst": ${hdgst:-false}, 00:13:11.911 "ddgst": ${ddgst:-false} 00:13:11.911 }, 00:13:11.911 "method": "bdev_nvme_attach_controller" 00:13:11.911 } 00:13:11.911 EOF 00:13:11.911 )") 00:13:11.911 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:13:11.911 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
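Note: before the bdevio binary takes over, the test provisions the freshly started target through its RPC socket; each call appears in the rpc_cmd traces above, and gen_nvmf_target_json is now assembling the initiator-side JSON that is printed in full just below. A sketch of the same provisioning issued by hand with scripts/rpc.py (an assumption for illustration; the trace only shows the suite's rpc_cmd wrapper):

  # TCP transport, a 64 MiB / 512 B-block malloc bdev as the namespace,
  # and a listener on the namespaced address the initiator will dial
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420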
00:13:11.911 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:13:11.911 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:13:11.911 "params": { 00:13:11.911 "name": "Nvme1", 00:13:11.911 "trtype": "tcp", 00:13:11.911 "traddr": "10.0.0.3", 00:13:11.911 "adrfam": "ipv4", 00:13:11.911 "trsvcid": "4420", 00:13:11.911 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:11.911 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:11.911 "hdgst": false, 00:13:11.911 "ddgst": false 00:13:11.911 }, 00:13:11.911 "method": "bdev_nvme_attach_controller" 00:13:11.911 }' 00:13:11.911 [2024-11-15 10:56:58.617110] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:13:11.911 [2024-11-15 10:56:58.617202] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid70812 ] 00:13:12.170 [2024-11-15 10:56:58.777996] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:12.170 [2024-11-15 10:56:58.840166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:12.170 [2024-11-15 10:56:58.840307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:12.170 [2024-11-15 10:56:58.840310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:12.170 [2024-11-15 10:56:58.853484] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:12.429 I/O targets: 00:13:12.429 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:13:12.429 00:13:12.429 00:13:12.429 CUnit - A unit testing framework for C - Version 2.1-3 00:13:12.429 http://cunit.sourceforge.net/ 00:13:12.429 00:13:12.429 00:13:12.429 Suite: bdevio tests on: Nvme1n1 00:13:12.429 Test: blockdev write read block ...passed 00:13:12.429 Test: blockdev write zeroes read block ...passed 00:13:12.429 Test: blockdev write zeroes read no split ...passed 00:13:12.429 Test: blockdev write zeroes read split ...passed 00:13:12.429 Test: blockdev write zeroes read split partial ...passed 00:13:12.429 Test: blockdev reset ...[2024-11-15 10:56:59.082024] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:13:12.429 [2024-11-15 10:56:59.082381] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xace310 (9): Bad file descriptor 00:13:12.429 [2024-11-15 10:56:59.096826] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:13:12.429 passed 00:13:12.429 Test: blockdev write read 8 blocks ...passed 00:13:12.429 Test: blockdev write read size > 128k ...passed 00:13:12.429 Test: blockdev write read invalid size ...passed 00:13:12.429 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:12.429 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:12.429 Test: blockdev write read max offset ...passed 00:13:12.429 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:12.429 Test: blockdev writev readv 8 blocks ...passed 00:13:12.429 Test: blockdev writev readv 30 x 1block ...passed 00:13:12.429 Test: blockdev writev readv block ...passed 00:13:12.429 Test: blockdev writev readv size > 128k ...passed 00:13:12.429 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:12.429 Test: blockdev comparev and writev ...[2024-11-15 10:56:59.107777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:12.429 [2024-11-15 10:56:59.108024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:13:12.429 [2024-11-15 10:56:59.108071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:12.429 [2024-11-15 10:56:59.108086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:13:12.429 [2024-11-15 10:56:59.108402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:12.429 [2024-11-15 10:56:59.108425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:13:12.429 [2024-11-15 10:56:59.108447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:12.429 [2024-11-15 10:56:59.108459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:13:12.429 [2024-11-15 10:56:59.108779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:12.429 [2024-11-15 10:56:59.108801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:13:12.430 [2024-11-15 10:56:59.108822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:12.430 [2024-11-15 10:56:59.108835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:13:12.430 [2024-11-15 10:56:59.109126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:12.430 [2024-11-15 10:56:59.109153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:13:12.430 [2024-11-15 10:56:59.109175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:12.430 [2024-11-15 10:56:59.109188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:13:12.430 passed 00:13:12.430 Test: blockdev nvme passthru rw ...passed 00:13:12.430 Test: blockdev nvme passthru vendor specific ...[2024-11-15 10:56:59.110264] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:12.430 [2024-11-15 10:56:59.110306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:13:12.430 [2024-11-15 10:56:59.110440] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:12.430 [2024-11-15 10:56:59.110461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:13:12.430 [2024-11-15 10:56:59.110593] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:12.430 [2024-11-15 10:56:59.110614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:13:12.430 [2024-11-15 10:56:59.110726] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:12.430 passed 00:13:12.430 Test: blockdev nvme admin passthru ...[2024-11-15 10:56:59.110745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:13:12.430 passed 00:13:12.430 Test: blockdev copy ...passed 00:13:12.430 00:13:12.430 Run Summary: Type Total Ran Passed Failed Inactive 00:13:12.430 suites 1 1 n/a 0 0 00:13:12.430 tests 23 23 23 0 0 00:13:12.430 asserts 152 152 152 0 n/a 00:13:12.430 00:13:12.430 Elapsed time = 0.170 seconds 00:13:12.689 10:56:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:12.689 10:56:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.689 10:56:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:12.689 10:56:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.689 10:56:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:13:12.689 10:56:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:13:12.689 10:56:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:12.689 10:56:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:13:12.689 10:56:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:12.689 10:56:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:13:12.689 10:56:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:12.689 10:56:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:12.689 rmmod nvme_tcp 00:13:12.689 rmmod nvme_fabrics 00:13:12.948 rmmod nvme_keyring 00:13:12.948 10:56:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:12.948 10:56:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:13:12.948 10:56:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:13:12.948 10:56:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 70776 ']' 00:13:12.948 10:56:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 70776 00:13:12.948 10:56:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 70776 ']' 00:13:12.948 10:56:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 70776 00:13:12.948 10:56:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:13:12.948 10:56:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:12.948 10:56:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70776 00:13:12.948 killing process with pid 70776 00:13:12.948 10:56:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:13:12.948 10:56:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:13:12.948 10:56:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70776' 00:13:12.948 10:56:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 70776 00:13:12.948 10:56:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 70776 00:13:13.207 10:56:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:13.207 10:56:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:13.207 10:56:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:13.207 10:56:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:13:13.208 10:56:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:13:13.208 10:56:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:13.208 10:56:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:13:13.208 10:56:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:13.208 10:56:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:13:13.208 10:56:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:13:13.208 10:56:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:13:13.208 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:13:13.208 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:13:13.208 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:13:13.208 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:13:13.208 10:57:00 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:13:13.208 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:13:13.208 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:13:13.467 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:13:13.467 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:13:13.467 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:13.467 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:13.467 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@246 -- # remove_spdk_ns 00:13:13.467 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:13.467 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:13.467 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:13.467 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@300 -- # return 0 00:13:13.467 00:13:13.467 real 0m3.443s 00:13:13.467 user 0m10.583s 00:13:13.467 sys 0m1.344s 00:13:13.467 ************************************ 00:13:13.467 END TEST nvmf_bdevio_no_huge 00:13:13.467 ************************************ 00:13:13.467 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:13.467 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:13.467 10:57:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:13:13.467 10:57:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:13.467 10:57:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:13.467 10:57:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:13.467 ************************************ 00:13:13.467 START TEST nvmf_tls 00:13:13.467 ************************************ 00:13:13.467 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:13:13.467 * Looking for test storage... 
00:13:13.467 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:13.467 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:13.467 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:13:13.467 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:13.726 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:13.726 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:13.726 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:13.726 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:13.726 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:13:13.726 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:13:13.726 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:13:13.726 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:13:13.726 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:13:13.726 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:13:13.726 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:13:13.726 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:13.726 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:13:13.726 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:13:13.726 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:13.726 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:13.726 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:13:13.726 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:13:13.726 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:13.726 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:13:13.726 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:13:13.726 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:13:13.726 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:13:13.726 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:13.726 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:13:13.726 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:13:13.726 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:13.726 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:13.726 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:13:13.726 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:13.726 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:13.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:13.726 --rc genhtml_branch_coverage=1 00:13:13.726 --rc genhtml_function_coverage=1 00:13:13.727 --rc genhtml_legend=1 00:13:13.727 --rc geninfo_all_blocks=1 00:13:13.727 --rc geninfo_unexecuted_blocks=1 00:13:13.727 00:13:13.727 ' 00:13:13.727 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:13.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:13.727 --rc genhtml_branch_coverage=1 00:13:13.727 --rc genhtml_function_coverage=1 00:13:13.727 --rc genhtml_legend=1 00:13:13.727 --rc geninfo_all_blocks=1 00:13:13.727 --rc geninfo_unexecuted_blocks=1 00:13:13.727 00:13:13.727 ' 00:13:13.727 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:13.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:13.727 --rc genhtml_branch_coverage=1 00:13:13.727 --rc genhtml_function_coverage=1 00:13:13.727 --rc genhtml_legend=1 00:13:13.727 --rc geninfo_all_blocks=1 00:13:13.727 --rc geninfo_unexecuted_blocks=1 00:13:13.727 00:13:13.727 ' 00:13:13.727 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:13.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:13.727 --rc genhtml_branch_coverage=1 00:13:13.727 --rc genhtml_function_coverage=1 00:13:13.727 --rc genhtml_legend=1 00:13:13.727 --rc geninfo_all_blocks=1 00:13:13.727 --rc geninfo_unexecuted_blocks=1 00:13:13.727 00:13:13.727 ' 00:13:13.727 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:13.727 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:13:13.727 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:13.727 10:57:00 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:13.727 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:13.727 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:13.727 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:13.727 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:13.727 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:13.727 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:13.727 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:13.727 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:13.727 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:13:13.727 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:13:13.727 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:13.727 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:13.727 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:13.727 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:13.727 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:13.727 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:13:13.727 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:13.727 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:13.727 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:13.727 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.727 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.727 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.727 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:13:13.727 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.727 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:13:13.727 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:13.727 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:13.727 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:13.727 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:13.727 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:13.727 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:13.727 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:13.727 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:13.727 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:13.727 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:13.727 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:13.727 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:13:13.727 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:13.727 
10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:13.727 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:13.727 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:13.727 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:13.727 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:13.727 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:13.727 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:13.727 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:13:13.727 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:13:13.727 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:13:13.727 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:13:13.727 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:13:13.727 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@460 -- # nvmf_veth_init 00:13:13.727 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:13.727 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:13.727 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:13.727 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:13.727 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:13.727 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:13:13.727 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:13.727 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:13.727 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:13.727 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:13.727 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:13.727 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:13.727 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:13.727 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:13.727 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:13.727 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:13.727 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:13.727 Cannot find device "nvmf_init_br" 00:13:13.728 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@162 -- # true 00:13:13.728 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:13.728 Cannot find device "nvmf_init_br2" 00:13:13.728 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true 00:13:13.728 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:13:13.728 Cannot find device "nvmf_tgt_br" 00:13:13.728 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # true 00:13:13.728 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:13:13.728 Cannot find device "nvmf_tgt_br2" 00:13:13.728 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # true 00:13:13.728 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:13.728 Cannot find device "nvmf_init_br" 00:13:13.728 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # true 00:13:13.728 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:13.728 Cannot find device "nvmf_init_br2" 00:13:13.728 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # true 00:13:13.728 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:13.728 Cannot find device "nvmf_tgt_br" 00:13:13.728 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # true 00:13:13.728 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:13.728 Cannot find device "nvmf_tgt_br2" 00:13:13.728 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # true 00:13:13.728 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:13.728 Cannot find device "nvmf_br" 00:13:13.728 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # true 00:13:13.728 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:13.728 Cannot find device "nvmf_init_if" 00:13:13.728 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # true 00:13:13.728 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:13.728 Cannot find device "nvmf_init_if2" 00:13:13.728 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # true 00:13:13.728 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:13.728 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:13.728 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # true 00:13:13.728 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:13.728 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:13.728 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # true 00:13:13.728 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:13.728 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:13.728 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@181 -- # ip link 
add nvmf_init_if2 type veth peer name nvmf_init_br2 00:13:13.728 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:13.987 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:13.987 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:13.987 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:13.987 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:13.987 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:13.987 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:13.987 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:13.987 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:13.987 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:13.987 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:13.987 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:13.987 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:13.987 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:13.987 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:13.987 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:13.987 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:13.987 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:13.987 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:13.987 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:13.987 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:13:13.987 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:13.987 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:13.987 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:13.987 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:13.987 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:13.987 10:57:00 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:13.987 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:13.987 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:13.987 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:13.987 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:13.987 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:13:13.987 00:13:13.987 --- 10.0.0.3 ping statistics --- 00:13:13.987 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:13.987 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:13:13.987 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:13.987 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:13:13.987 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.050 ms 00:13:13.987 00:13:13.987 --- 10.0.0.4 ping statistics --- 00:13:13.987 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:13.987 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:13:13.987 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:13.987 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:13.987 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:13:13.987 00:13:13.987 --- 10.0.0.1 ping statistics --- 00:13:13.987 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:13.987 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:13:13.987 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:13.987 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:13.987 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:13:13.987 00:13:13.987 --- 10.0.0.2 ping statistics --- 00:13:13.987 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:13.987 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:13:13.987 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:13.987 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@461 -- # return 0 00:13:13.987 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:13.987 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:13.987 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:13.987 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:13.987 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:13.987 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:13.987 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:13.987 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:13:13.987 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:13.987 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:13.987 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:13.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:13.987 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71049 00:13:13.987 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71049 00:13:13.987 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:13:13.987 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71049 ']' 00:13:13.987 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:13.987 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:13.987 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:13.987 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:13.987 10:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:14.245 [2024-11-15 10:57:00.890178] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
00:13:14.245 [2024-11-15 10:57:00.890437] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:14.245 [2024-11-15 10:57:01.046103] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:14.503 [2024-11-15 10:57:01.106786] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:14.503 [2024-11-15 10:57:01.106856] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:14.503 [2024-11-15 10:57:01.106872] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:14.503 [2024-11-15 10:57:01.106882] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:14.503 [2024-11-15 10:57:01.106891] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:14.503 [2024-11-15 10:57:01.107348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:14.503 10:57:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:14.503 10:57:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:14.503 10:57:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:14.504 10:57:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:14.504 10:57:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:14.504 10:57:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:14.504 10:57:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:13:14.504 10:57:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:13:14.762 true 00:13:14.762 10:57:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:14.762 10:57:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:13:15.021 10:57:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:13:15.021 10:57:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:13:15.021 10:57:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:13:15.279 10:57:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:13:15.279 10:57:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:15.538 10:57:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:13:15.538 10:57:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:13:15.538 10:57:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:13:15.796 10:57:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i 
ssl 00:13:15.796 10:57:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:13:16.055 10:57:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:13:16.055 10:57:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:13:16.055 10:57:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:16.055 10:57:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:13:16.313 10:57:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:13:16.314 10:57:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:13:16.314 10:57:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:13:16.573 10:57:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:16.573 10:57:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:13:16.832 10:57:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:13:16.832 10:57:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:13:16.832 10:57:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:13:17.091 10:57:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:17.091 10:57:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:13:17.365 10:57:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:13:17.365 10:57:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:13:17.365 10:57:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:13:17.365 10:57:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:13:17.365 10:57:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:13:17.365 10:57:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:13:17.365 10:57:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:13:17.365 10:57:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:13:17.365 10:57:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:13:17.365 10:57:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:13:17.365 10:57:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:13:17.365 10:57:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:13:17.365 10:57:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:13:17.365 10:57:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:13:17.365 10:57:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:13:17.365 10:57:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:13:17.365 10:57:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:13:17.365 10:57:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:13:17.365 10:57:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:13:17.365 10:57:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.07qtf3bDHm 00:13:17.365 10:57:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:13:17.365 10:57:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.JUluD6zCsF 00:13:17.365 10:57:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:13:17.365 10:57:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:13:17.365 10:57:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.07qtf3bDHm 00:13:17.365 10:57:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.JUluD6zCsF 00:13:17.365 10:57:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:13:17.638 10:57:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:13:17.896 [2024-11-15 10:57:04.739843] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:18.155 10:57:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.07qtf3bDHm 00:13:18.155 10:57:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.07qtf3bDHm 00:13:18.155 10:57:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:18.155 [2024-11-15 10:57:05.010523] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:18.413 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:18.413 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:13:18.672 [2024-11-15 10:57:05.446589] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:18.672 [2024-11-15 10:57:05.446851] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:18.672 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:18.931 malloc0 00:13:18.931 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:19.190 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.07qtf3bDHm 00:13:19.449 10:57:06 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:13:19.708 10:57:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.07qtf3bDHm 00:13:31.917 Initializing NVMe Controllers 00:13:31.917 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:13:31.917 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:31.917 Initialization complete. Launching workers. 00:13:31.917 ======================================================== 00:13:31.917 Latency(us) 00:13:31.917 Device Information : IOPS MiB/s Average min max 00:13:31.917 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11010.56 43.01 5813.77 1231.82 6832.88 00:13:31.917 ======================================================== 00:13:31.917 Total : 11010.56 43.01 5813.77 1231.82 6832.88 00:13:31.917 00:13:31.917 10:57:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.07qtf3bDHm 00:13:31.917 10:57:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:31.917 10:57:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:31.917 10:57:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:31.917 10:57:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.07qtf3bDHm 00:13:31.917 10:57:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:31.917 10:57:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71274 00:13:31.917 10:57:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:31.917 10:57:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:31.917 10:57:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71274 /var/tmp/bdevperf.sock 00:13:31.917 10:57:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71274 ']' 00:13:31.917 10:57:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:31.917 10:57:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:31.917 10:57:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:31.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
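For reference when re-running this stage by hand, the target-side sequence traced above condenses to the RPC calls below. This is a sketch only: it assumes an nvmf_tgt started with --wait-for-rpc on the default /var/tmp/spdk.sock and reuses the key file this run happened to generate (/tmp/tmp.07qtf3bDHm, holding one of the NVMeTLSkey-1:01:...: interchange-format secrets printed above, chmod 0600).

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# socket layer: select the ssl implementation and pin TLS 1.3 before framework init
$rpc_py sock_set_default_impl -i ssl
$rpc_py sock_impl_set_options -i ssl --tls-version 13
$rpc_py framework_start_init
# transport, subsystem, and a TLS-capable listener (-k) on the namespaced target address
$rpc_py nvmf_create_transport -t tcp -o
$rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
# backing namespace plus the PSK that host1 must present
$rpc_py bdev_malloc_create 32 4096 -b malloc0
$rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc_py keyring_file_add_key key0 /tmp/tmp.07qtf3bDHm
$rpc_py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0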
00:13:31.917 10:57:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:31.917 10:57:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:31.917 [2024-11-15 10:57:16.662742] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:13:31.917 [2024-11-15 10:57:16.663283] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71274 ] 00:13:31.917 [2024-11-15 10:57:16.813344] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:31.917 [2024-11-15 10:57:16.866653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:31.918 [2024-11-15 10:57:16.924188] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:31.918 10:57:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:31.918 10:57:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:31.918 10:57:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.07qtf3bDHm 00:13:31.918 10:57:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:13:31.918 [2024-11-15 10:57:18.148763] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:31.918 TLSTESTn1 00:13:31.918 10:57:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:13:31.918 Running I/O for 10 seconds... 
00:13:33.792 4513.00 IOPS, 17.63 MiB/s [2024-11-15T10:57:21.590Z] 4551.50 IOPS, 17.78 MiB/s [2024-11-15T10:57:22.569Z] 4568.33 IOPS, 17.85 MiB/s [2024-11-15T10:57:23.506Z] 4574.25 IOPS, 17.87 MiB/s [2024-11-15T10:57:24.442Z] 4582.40 IOPS, 17.90 MiB/s [2024-11-15T10:57:25.377Z] 4581.50 IOPS, 17.90 MiB/s [2024-11-15T10:57:26.755Z] 4581.57 IOPS, 17.90 MiB/s [2024-11-15T10:57:27.692Z] 4580.38 IOPS, 17.89 MiB/s [2024-11-15T10:57:28.630Z] 4584.33 IOPS, 17.91 MiB/s [2024-11-15T10:57:28.630Z] 4585.20 IOPS, 17.91 MiB/s 00:13:41.769 Latency(us) 00:13:41.769 [2024-11-15T10:57:28.630Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:41.770 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:13:41.770 Verification LBA range: start 0x0 length 0x2000 00:13:41.770 TLSTESTn1 : 10.01 4591.55 17.94 0.00 0.00 27831.89 4408.79 23592.96 00:13:41.770 [2024-11-15T10:57:28.631Z] =================================================================================================================== 00:13:41.770 [2024-11-15T10:57:28.631Z] Total : 4591.55 17.94 0.00 0.00 27831.89 4408.79 23592.96 00:13:41.770 { 00:13:41.770 "results": [ 00:13:41.770 { 00:13:41.770 "job": "TLSTESTn1", 00:13:41.770 "core_mask": "0x4", 00:13:41.770 "workload": "verify", 00:13:41.770 "status": "finished", 00:13:41.770 "verify_range": { 00:13:41.770 "start": 0, 00:13:41.770 "length": 8192 00:13:41.770 }, 00:13:41.770 "queue_depth": 128, 00:13:41.770 "io_size": 4096, 00:13:41.770 "runtime": 10.013621, 00:13:41.770 "iops": 4591.545855390373, 00:13:41.770 "mibps": 17.935725997618643, 00:13:41.770 "io_failed": 0, 00:13:41.770 "io_timeout": 0, 00:13:41.770 "avg_latency_us": 27831.894897401522, 00:13:41.770 "min_latency_us": 4408.785454545455, 00:13:41.770 "max_latency_us": 23592.96 00:13:41.770 } 00:13:41.770 ], 00:13:41.770 "core_count": 1 00:13:41.770 } 00:13:41.770 10:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:41.770 10:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 71274 00:13:41.770 10:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71274 ']' 00:13:41.770 10:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71274 00:13:41.770 10:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:41.770 10:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:41.770 10:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71274 00:13:41.770 10:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:13:41.770 10:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:13:41.770 10:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71274' 00:13:41.770 killing process with pid 71274 00:13:41.770 Received shutdown signal, test time was about 10.000000 seconds 00:13:41.770 00:13:41.770 Latency(us) 00:13:41.770 [2024-11-15T10:57:28.631Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:41.770 [2024-11-15T10:57:28.631Z] =================================================================================================================== 00:13:41.770 [2024-11-15T10:57:28.631Z] Total : 0.00 0.00 0.00 0.00 
0.00 0.00 0.00 00:13:41.770 10:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71274 00:13:41.770 10:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71274 00:13:42.028 10:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.JUluD6zCsF 00:13:42.028 10:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:13:42.028 10:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.JUluD6zCsF 00:13:42.028 10:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:13:42.028 10:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:42.028 10:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:13:42.029 10:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:42.029 10:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.JUluD6zCsF 00:13:42.029 10:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:42.029 10:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:42.029 10:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:42.029 10:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.JUluD6zCsF 00:13:42.029 10:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:42.029 10:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71413 00:13:42.029 10:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:42.029 10:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:42.029 10:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71413 /var/tmp/bdevperf.sock 00:13:42.029 10:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71413 ']' 00:13:42.029 10:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:42.029 10:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:42.029 10:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:42.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:42.029 10:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:42.029 10:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:42.029 [2024-11-15 10:57:28.696328] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
00:13:42.029 [2024-11-15 10:57:28.696813] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71413 ] 00:13:42.029 [2024-11-15 10:57:28.842960] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:42.288 [2024-11-15 10:57:28.896153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:42.288 [2024-11-15 10:57:28.950438] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:42.288 10:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:42.288 10:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:42.288 10:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.JUluD6zCsF 00:13:42.547 10:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:13:42.807 [2024-11-15 10:57:29.543642] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:42.807 [2024-11-15 10:57:29.549896] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:13:42.808 [2024-11-15 10:57:29.550287] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7fb0 (107): Transport endpoint is not connected 00:13:42.808 [2024-11-15 10:57:29.551278] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea7fb0 (9): Bad file descriptor 00:13:42.808 [2024-11-15 10:57:29.552275] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:13:42.808 [2024-11-15 10:57:29.552302] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:13:42.808 [2024-11-15 10:57:29.552313] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:13:42.808 [2024-11-15 10:57:29.552329] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:13:42.808 request: 00:13:42.808 { 00:13:42.808 "name": "TLSTEST", 00:13:42.808 "trtype": "tcp", 00:13:42.808 "traddr": "10.0.0.3", 00:13:42.808 "adrfam": "ipv4", 00:13:42.808 "trsvcid": "4420", 00:13:42.808 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:42.808 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:42.808 "prchk_reftag": false, 00:13:42.808 "prchk_guard": false, 00:13:42.808 "hdgst": false, 00:13:42.808 "ddgst": false, 00:13:42.808 "psk": "key0", 00:13:42.808 "allow_unrecognized_csi": false, 00:13:42.808 "method": "bdev_nvme_attach_controller", 00:13:42.808 "req_id": 1 00:13:42.808 } 00:13:42.808 Got JSON-RPC error response 00:13:42.808 response: 00:13:42.808 { 00:13:42.808 "code": -5, 00:13:42.808 "message": "Input/output error" 00:13:42.808 } 00:13:42.808 10:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71413 00:13:42.808 10:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71413 ']' 00:13:42.808 10:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71413 00:13:42.808 10:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:42.808 10:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:42.808 10:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71413 00:13:42.808 killing process with pid 71413 00:13:42.808 Received shutdown signal, test time was about 10.000000 seconds 00:13:42.808 00:13:42.808 Latency(us) 00:13:42.808 [2024-11-15T10:57:29.669Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:42.808 [2024-11-15T10:57:29.669Z] =================================================================================================================== 00:13:42.808 [2024-11-15T10:57:29.669Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:42.808 10:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:13:42.808 10:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:13:42.808 10:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71413' 00:13:42.808 10:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71413 00:13:42.808 10:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71413 00:13:43.068 10:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:13:43.068 10:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:13:43.068 10:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:43.068 10:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:43.068 10:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:43.068 10:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.07qtf3bDHm 00:13:43.068 10:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:13:43.068 10:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.07qtf3bDHm 
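The attach that just failed presented /tmp/tmp.JUluD6zCsF, which is not the PSK registered for host1 on cnode1, so the TLS handshake presumably cannot complete: the socket reads fail with errno 107 and bdev_nvme_attach_controller surfaces -5 (Input/output error), which is exactly what the NOT wrapper expects. For contrast, the attach that succeeded earlier in the run loads the matching key over the bdevperf RPC socket; sketched here with this run's paths:

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# bdevperf itself was launched as: bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
$rpc_py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.07qtf3bDHm
$rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
# the verify workload is then driven with: bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests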
00:13:43.068 10:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:13:43.068 10:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:43.068 10:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:13:43.068 10:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:43.068 10:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.07qtf3bDHm 00:13:43.068 10:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:43.068 10:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:43.068 10:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:13:43.068 10:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.07qtf3bDHm 00:13:43.068 10:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:43.068 10:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:43.068 10:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71434 00:13:43.068 10:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:43.068 10:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71434 /var/tmp/bdevperf.sock 00:13:43.068 10:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71434 ']' 00:13:43.068 10:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:43.068 10:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:43.068 10:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:43.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:43.068 10:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:43.068 10:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:43.068 [2024-11-15 10:57:29.837658] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
00:13:43.068 [2024-11-15 10:57:29.837890] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71434 ] 00:13:43.327 [2024-11-15 10:57:29.978312] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:43.327 [2024-11-15 10:57:30.030333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:43.327 [2024-11-15 10:57:30.085157] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:44.266 10:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:44.266 10:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:44.266 10:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.07qtf3bDHm 00:13:44.266 10:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:13:44.525 [2024-11-15 10:57:31.197514] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:44.525 [2024-11-15 10:57:31.202328] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:13:44.525 [2024-11-15 10:57:31.202367] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:13:44.525 [2024-11-15 10:57:31.202433] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:13:44.525 [2024-11-15 10:57:31.203092] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2309fb0 (107): Transport endpoint is not connected 00:13:44.525 [2024-11-15 10:57:31.204079] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2309fb0 (9): Bad file descriptor 00:13:44.525 [2024-11-15 10:57:31.205075] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:13:44.525 [2024-11-15 10:57:31.205096] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:13:44.525 [2024-11-15 10:57:31.205122] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:13:44.525 [2024-11-15 10:57:31.205136] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:13:44.525 request: 00:13:44.525 { 00:13:44.525 "name": "TLSTEST", 00:13:44.525 "trtype": "tcp", 00:13:44.525 "traddr": "10.0.0.3", 00:13:44.525 "adrfam": "ipv4", 00:13:44.525 "trsvcid": "4420", 00:13:44.525 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:44.525 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:13:44.525 "prchk_reftag": false, 00:13:44.525 "prchk_guard": false, 00:13:44.525 "hdgst": false, 00:13:44.525 "ddgst": false, 00:13:44.525 "psk": "key0", 00:13:44.526 "allow_unrecognized_csi": false, 00:13:44.526 "method": "bdev_nvme_attach_controller", 00:13:44.526 "req_id": 1 00:13:44.526 } 00:13:44.526 Got JSON-RPC error response 00:13:44.526 response: 00:13:44.526 { 00:13:44.526 "code": -5, 00:13:44.526 "message": "Input/output error" 00:13:44.526 } 00:13:44.526 10:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71434 00:13:44.526 10:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71434 ']' 00:13:44.526 10:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71434 00:13:44.526 10:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:44.526 10:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:44.526 10:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71434 00:13:44.526 killing process with pid 71434 00:13:44.526 Received shutdown signal, test time was about 10.000000 seconds 00:13:44.526 00:13:44.526 Latency(us) 00:13:44.526 [2024-11-15T10:57:31.387Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:44.526 [2024-11-15T10:57:31.387Z] =================================================================================================================== 00:13:44.526 [2024-11-15T10:57:31.387Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:44.526 10:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:13:44.526 10:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:13:44.526 10:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71434' 00:13:44.526 10:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71434 00:13:44.526 10:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71434 00:13:44.785 10:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:13:44.785 10:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:13:44.785 10:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:44.785 10:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:44.785 10:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:44.785 10:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.07qtf3bDHm 00:13:44.785 10:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:13:44.785 10:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.07qtf3bDHm 
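Context for the failure just above: the target looks up the pre-shared key by a TLS PSK identity built from the connecting host NQN and the subsystem NQN, and the handshake is rejected because no key is registered for nqn.2016-06.io.spdk:host2 (the test evidently passes a mismatched host NQN on purpose, so its NOT wrapper expects this attach to fail with the I/O error shown). A minimal sketch of the identity string the errors refer to; the literal string is copied from the errors above, while the decoding of the "NVMe0R01" prefix (TLS 1.3, retained PSK, SHA-256 hash) is an assumption:

hostnqn=nqn.2016-06.io.spdk:host2
subnqn=nqn.2016-06.io.spdk:cnode1
# "NVMe" + "0" + "R" + "01" + " <hostnqn> <subnqn>" -- prefix decoding assumed, layout taken from the log
printf 'NVMe0R01 %s %s\n' "$hostnqn" "$subnqn"
# -> NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1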
00:13:44.785 10:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:13:44.785 10:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:44.785 10:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:13:44.785 10:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:44.785 10:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.07qtf3bDHm 00:13:44.785 10:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:44.785 10:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:13:44.785 10:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:44.785 10:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.07qtf3bDHm 00:13:44.785 10:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:44.785 10:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:44.785 10:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71464 00:13:44.785 10:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:44.785 10:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71464 /var/tmp/bdevperf.sock 00:13:44.785 10:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71464 ']' 00:13:44.785 10:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:44.785 10:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:44.785 10:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:44.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:44.785 10:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:44.785 10:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:44.785 [2024-11-15 10:57:31.492340] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
00:13:44.785 [2024-11-15 10:57:31.492434] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71464 ] 00:13:44.785 [2024-11-15 10:57:31.631425] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:45.070 [2024-11-15 10:57:31.678561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:45.070 [2024-11-15 10:57:31.732372] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:45.070 10:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:45.070 10:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:45.070 10:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.07qtf3bDHm 00:13:45.329 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:13:45.589 [2024-11-15 10:57:32.206807] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:45.589 [2024-11-15 10:57:32.211838] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:13:45.589 [2024-11-15 10:57:32.211878] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:13:45.589 [2024-11-15 10:57:32.211945] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:13:45.589 [2024-11-15 10:57:32.212524] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2526fb0 (107): Transport endpoint is not connected 00:13:45.589 [2024-11-15 10:57:32.213525] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2526fb0 (9): Bad file descriptor 00:13:45.589 [2024-11-15 10:57:32.214507] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:13:45.589 [2024-11-15 10:57:32.214548] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:13:45.589 [2024-11-15 10:57:32.214560] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:13:45.589 [2024-11-15 10:57:32.214574] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
00:13:45.589 request: 00:13:45.589 { 00:13:45.589 "name": "TLSTEST", 00:13:45.589 "trtype": "tcp", 00:13:45.589 "traddr": "10.0.0.3", 00:13:45.589 "adrfam": "ipv4", 00:13:45.589 "trsvcid": "4420", 00:13:45.589 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:13:45.589 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:45.589 "prchk_reftag": false, 00:13:45.589 "prchk_guard": false, 00:13:45.589 "hdgst": false, 00:13:45.590 "ddgst": false, 00:13:45.590 "psk": "key0", 00:13:45.590 "allow_unrecognized_csi": false, 00:13:45.590 "method": "bdev_nvme_attach_controller", 00:13:45.590 "req_id": 1 00:13:45.590 } 00:13:45.590 Got JSON-RPC error response 00:13:45.590 response: 00:13:45.590 { 00:13:45.590 "code": -5, 00:13:45.590 "message": "Input/output error" 00:13:45.590 } 00:13:45.590 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71464 00:13:45.590 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71464 ']' 00:13:45.590 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71464 00:13:45.590 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:45.590 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:45.590 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71464 00:13:45.590 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:13:45.590 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:13:45.590 killing process with pid 71464 00:13:45.590 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71464' 00:13:45.590 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71464 00:13:45.590 Received shutdown signal, test time was about 10.000000 seconds 00:13:45.590 00:13:45.590 Latency(us) 00:13:45.590 [2024-11-15T10:57:32.451Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:45.590 [2024-11-15T10:57:32.451Z] =================================================================================================================== 00:13:45.590 [2024-11-15T10:57:32.451Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:45.590 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71464 00:13:45.590 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:13:45.590 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:13:45.590 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:45.590 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:45.590 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:45.590 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:13:45.590 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:13:45.590 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:13:45.590 10:57:32 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:13:45.590 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:45.590 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:13:45.850 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:45.850 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:13:45.850 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:45.850 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:45.850 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:45.850 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:13:45.850 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:45.850 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71485 00:13:45.850 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:45.850 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71485 /var/tmp/bdevperf.sock 00:13:45.850 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71485 ']' 00:13:45.850 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:45.850 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:45.850 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:45.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:45.850 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:45.850 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:45.850 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:45.850 [2024-11-15 10:57:32.498631] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
00:13:45.850 [2024-11-15 10:57:32.498723] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71485 ] 00:13:45.850 [2024-11-15 10:57:32.640643] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:45.850 [2024-11-15 10:57:32.690139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:46.109 [2024-11-15 10:57:32.744726] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:46.109 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:46.109 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:46.109 10:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:13:46.368 [2024-11-15 10:57:33.004700] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:13:46.368 [2024-11-15 10:57:33.004761] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:13:46.368 request: 00:13:46.368 { 00:13:46.368 "name": "key0", 00:13:46.368 "path": "", 00:13:46.368 "method": "keyring_file_add_key", 00:13:46.368 "req_id": 1 00:13:46.368 } 00:13:46.368 Got JSON-RPC error response 00:13:46.368 response: 00:13:46.368 { 00:13:46.368 "code": -1, 00:13:46.368 "message": "Operation not permitted" 00:13:46.368 } 00:13:46.368 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:13:46.368 [2024-11-15 10:57:33.212894] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:46.368 [2024-11-15 10:57:33.213014] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:13:46.368 request: 00:13:46.368 { 00:13:46.368 "name": "TLSTEST", 00:13:46.368 "trtype": "tcp", 00:13:46.368 "traddr": "10.0.0.3", 00:13:46.368 "adrfam": "ipv4", 00:13:46.368 "trsvcid": "4420", 00:13:46.368 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:46.368 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:46.368 "prchk_reftag": false, 00:13:46.368 "prchk_guard": false, 00:13:46.368 "hdgst": false, 00:13:46.368 "ddgst": false, 00:13:46.368 "psk": "key0", 00:13:46.368 "allow_unrecognized_csi": false, 00:13:46.368 "method": "bdev_nvme_attach_controller", 00:13:46.368 "req_id": 1 00:13:46.368 } 00:13:46.368 Got JSON-RPC error response 00:13:46.368 response: 00:13:46.368 { 00:13:46.368 "code": -126, 00:13:46.368 "message": "Required key not available" 00:13:46.368 } 00:13:46.628 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71485 00:13:46.628 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71485 ']' 00:13:46.628 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71485 00:13:46.628 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:46.628 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:46.628 10:57:33 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71485 00:13:46.628 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:13:46.628 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:13:46.628 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71485' 00:13:46.628 killing process with pid 71485 00:13:46.628 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71485 00:13:46.628 Received shutdown signal, test time was about 10.000000 seconds 00:13:46.628 00:13:46.628 Latency(us) 00:13:46.628 [2024-11-15T10:57:33.489Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:46.628 [2024-11-15T10:57:33.489Z] =================================================================================================================== 00:13:46.628 [2024-11-15T10:57:33.489Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:46.628 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71485 00:13:46.628 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:13:46.628 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:13:46.628 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:46.628 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:46.628 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:46.628 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 71049 00:13:46.628 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71049 ']' 00:13:46.628 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71049 00:13:46.628 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:46.628 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:46.628 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71049 00:13:46.628 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:46.628 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:46.628 killing process with pid 71049 00:13:46.628 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71049' 00:13:46.628 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71049 00:13:46.628 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71049 00:13:46.888 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:13:46.888 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:13:46.888 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:13:46.888 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 
-- # prefix=NVMeTLSkey-1 00:13:46.888 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:13:46.888 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:13:46.888 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:13:47.148 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:13:47.148 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:13:47.148 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.cv7zr8cbIf 00:13:47.148 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:13:47.148 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.cv7zr8cbIf 00:13:47.148 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:13:47.148 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:47.148 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:47.148 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:47.148 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71516 00:13:47.148 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:47.148 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71516 00:13:47.148 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71516 ']' 00:13:47.148 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:47.148 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:47.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:47.148 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:47.148 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:47.148 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:47.148 [2024-11-15 10:57:33.836335] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:13:47.148 [2024-11-15 10:57:33.836433] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:47.148 [2024-11-15 10:57:33.977136] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:47.407 [2024-11-15 10:57:34.035081] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:47.407 [2024-11-15 10:57:34.035138] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
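The NVMeTLSkey-1:02:...: value produced above is the TLS PSK interchange form used for the rest of this test: the configured key bytes with a CRC-32 appended, base64-encoded, and wrapped as NVMeTLSkey-1:<hash>:...:, where "02" follows the digest 2 argument above (commonly the SHA-384 variant). A minimal sketch of that transformation, assuming the CRC-32 is the zlib CRC-32 of the raw key bytes appended in little-endian order, mirroring the inline python call above:

python3 <<'EOF'
import base64, zlib
key = b"00112233445566778899aabbccddeeff0011223344556677"  # key string from this run
crc = zlib.crc32(key).to_bytes(4, "little")                 # assumed byte order
print("NVMeTLSkey-1:02:" + base64.b64encode(key + crc).decode() + ":")
EOF
# expected to match the key_long value above if these assumptions hold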
00:13:47.407 [2024-11-15 10:57:34.035149] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:47.407 [2024-11-15 10:57:34.035156] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:47.407 [2024-11-15 10:57:34.035163] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:47.407 [2024-11-15 10:57:34.035560] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:47.407 [2024-11-15 10:57:34.103210] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:47.976 10:57:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:47.976 10:57:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:47.976 10:57:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:47.976 10:57:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:47.976 10:57:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:47.976 10:57:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:47.976 10:57:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.cv7zr8cbIf 00:13:47.976 10:57:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.cv7zr8cbIf 00:13:47.976 10:57:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:48.234 [2024-11-15 10:57:34.988985] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:48.234 10:57:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:48.493 10:57:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:13:48.752 [2024-11-15 10:57:35.501107] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:48.752 [2024-11-15 10:57:35.501351] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:48.752 10:57:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:49.016 malloc0 00:13:49.016 10:57:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:49.276 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.cv7zr8cbIf 00:13:49.535 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:13:49.793 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.cv7zr8cbIf 00:13:49.793 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 
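For reference, the RPC sequence behind the passing case: setup_nvmf_tgt above configured the target with a TLS-enabled listener and a PSK-protected host entry, and run_bdevperf below adds the same key to the bdevperf application, attaches a controller with it, and drives I/O. Consolidated here as a sketch only (every command, path, and address is taken from this run; this is not an extra test step):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# target side (default RPC socket)
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc keyring_file_add_key key0 /tmp/tmp.cv7zr8cbIf
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
# initiator side (bdevperf RPC socket)
$rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.cv7zr8cbIf
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests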
00:13:49.793 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:49.793 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:49.793 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.cv7zr8cbIf 00:13:49.793 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:49.793 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71576 00:13:49.793 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:49.793 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:49.793 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71576 /var/tmp/bdevperf.sock 00:13:49.793 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71576 ']' 00:13:49.793 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:49.793 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:49.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:49.793 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:49.793 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:49.793 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:49.793 [2024-11-15 10:57:36.570248] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
00:13:49.793 [2024-11-15 10:57:36.570336] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71576 ] 00:13:50.052 [2024-11-15 10:57:36.720192] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:50.052 [2024-11-15 10:57:36.782811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:50.052 [2024-11-15 10:57:36.840936] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:50.052 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:50.052 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:50.052 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.cv7zr8cbIf 00:13:50.311 10:57:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:13:50.571 [2024-11-15 10:57:37.315929] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:50.571 TLSTESTn1 00:13:50.571 10:57:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:13:50.829 Running I/O for 10 seconds... 00:13:52.699 4469.00 IOPS, 17.46 MiB/s [2024-11-15T10:57:40.939Z] 4499.00 IOPS, 17.57 MiB/s [2024-11-15T10:57:41.509Z] 4514.33 IOPS, 17.63 MiB/s [2024-11-15T10:57:42.884Z] 4543.75 IOPS, 17.75 MiB/s [2024-11-15T10:57:43.821Z] 4540.60 IOPS, 17.74 MiB/s [2024-11-15T10:57:44.756Z] 4555.33 IOPS, 17.79 MiB/s [2024-11-15T10:57:45.691Z] 4552.86 IOPS, 17.78 MiB/s [2024-11-15T10:57:46.628Z] 4551.75 IOPS, 17.78 MiB/s [2024-11-15T10:57:47.564Z] 4548.00 IOPS, 17.77 MiB/s [2024-11-15T10:57:47.565Z] 4545.50 IOPS, 17.76 MiB/s 00:14:00.704 Latency(us) 00:14:00.704 [2024-11-15T10:57:47.565Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:00.704 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:00.704 Verification LBA range: start 0x0 length 0x2000 00:14:00.704 TLSTESTn1 : 10.01 4552.08 17.78 0.00 0.00 28073.48 4081.11 23592.96 00:14:00.704 [2024-11-15T10:57:47.565Z] =================================================================================================================== 00:14:00.704 [2024-11-15T10:57:47.565Z] Total : 4552.08 17.78 0.00 0.00 28073.48 4081.11 23592.96 00:14:00.704 { 00:14:00.704 "results": [ 00:14:00.704 { 00:14:00.704 "job": "TLSTESTn1", 00:14:00.704 "core_mask": "0x4", 00:14:00.704 "workload": "verify", 00:14:00.704 "status": "finished", 00:14:00.704 "verify_range": { 00:14:00.704 "start": 0, 00:14:00.704 "length": 8192 00:14:00.704 }, 00:14:00.704 "queue_depth": 128, 00:14:00.704 "io_size": 4096, 00:14:00.704 "runtime": 10.013662, 00:14:00.704 "iops": 4552.080947010195, 00:14:00.704 "mibps": 17.781566199258574, 00:14:00.704 "io_failed": 0, 00:14:00.704 "io_timeout": 0, 00:14:00.704 "avg_latency_us": 28073.484862977228, 00:14:00.704 "min_latency_us": 4081.1054545454544, 00:14:00.704 
"max_latency_us": 23592.96 00:14:00.704 } 00:14:00.704 ], 00:14:00.704 "core_count": 1 00:14:00.704 } 00:14:00.704 10:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:00.704 10:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 71576 00:14:00.704 10:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71576 ']' 00:14:00.704 10:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71576 00:14:00.704 10:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:00.704 10:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:00.704 10:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71576 00:14:00.963 10:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:14:00.963 killing process with pid 71576 00:14:00.963 10:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:14:00.963 10:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71576' 00:14:00.963 Received shutdown signal, test time was about 10.000000 seconds 00:14:00.963 00:14:00.963 Latency(us) 00:14:00.963 [2024-11-15T10:57:47.824Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:00.963 [2024-11-15T10:57:47.824Z] =================================================================================================================== 00:14:00.963 [2024-11-15T10:57:47.824Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:00.963 10:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71576 00:14:00.963 10:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71576 00:14:00.963 10:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.cv7zr8cbIf 00:14:00.963 10:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.cv7zr8cbIf 00:14:00.963 10:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:14:00.963 10:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.cv7zr8cbIf 00:14:00.963 10:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:14:00.963 10:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:00.963 10:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:14:00.963 10:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:00.963 10:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.cv7zr8cbIf 00:14:00.963 10:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:00.963 10:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:00.963 10:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:00.963 10:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.cv7zr8cbIf 00:14:00.963 10:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:00.963 10:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71700 00:14:00.963 10:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:00.963 10:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:00.963 10:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71700 /var/tmp/bdevperf.sock 00:14:00.963 10:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71700 ']' 00:14:00.963 10:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:00.963 10:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:00.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:00.963 10:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:00.963 10:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:00.963 10:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:00.963 [2024-11-15 10:57:47.821450] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
00:14:00.963 [2024-11-15 10:57:47.821579] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71700 ] 00:14:01.222 [2024-11-15 10:57:47.965647] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:01.223 [2024-11-15 10:57:48.004071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:01.223 [2024-11-15 10:57:48.058359] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:01.481 10:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:01.481 10:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:01.481 10:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.cv7zr8cbIf 00:14:01.740 [2024-11-15 10:57:48.394284] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.cv7zr8cbIf': 0100666 00:14:01.740 [2024-11-15 10:57:48.394348] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:14:01.740 request: 00:14:01.740 { 00:14:01.740 "name": "key0", 00:14:01.740 "path": "/tmp/tmp.cv7zr8cbIf", 00:14:01.740 "method": "keyring_file_add_key", 00:14:01.740 "req_id": 1 00:14:01.740 } 00:14:01.740 Got JSON-RPC error response 00:14:01.740 response: 00:14:01.740 { 00:14:01.740 "code": -1, 00:14:01.740 "message": "Operation not permitted" 00:14:01.740 } 00:14:01.740 10:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:01.999 [2024-11-15 10:57:48.658445] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:02.000 [2024-11-15 10:57:48.658518] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:14:02.000 request: 00:14:02.000 { 00:14:02.000 "name": "TLSTEST", 00:14:02.000 "trtype": "tcp", 00:14:02.000 "traddr": "10.0.0.3", 00:14:02.000 "adrfam": "ipv4", 00:14:02.000 "trsvcid": "4420", 00:14:02.000 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:02.000 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:02.000 "prchk_reftag": false, 00:14:02.000 "prchk_guard": false, 00:14:02.000 "hdgst": false, 00:14:02.000 "ddgst": false, 00:14:02.000 "psk": "key0", 00:14:02.000 "allow_unrecognized_csi": false, 00:14:02.000 "method": "bdev_nvme_attach_controller", 00:14:02.000 "req_id": 1 00:14:02.000 } 00:14:02.000 Got JSON-RPC error response 00:14:02.000 response: 00:14:02.000 { 00:14:02.000 "code": -126, 00:14:02.000 "message": "Required key not available" 00:14:02.000 } 00:14:02.000 10:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71700 00:14:02.000 10:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71700 ']' 00:14:02.000 10:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71700 00:14:02.000 10:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:02.000 10:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:02.000 10:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71700 00:14:02.000 10:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:14:02.000 10:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:14:02.000 killing process with pid 71700 00:14:02.000 10:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71700' 00:14:02.000 10:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71700 00:14:02.000 Received shutdown signal, test time was about 10.000000 seconds 00:14:02.000 00:14:02.000 Latency(us) 00:14:02.000 [2024-11-15T10:57:48.861Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:02.000 [2024-11-15T10:57:48.861Z] =================================================================================================================== 00:14:02.000 [2024-11-15T10:57:48.861Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:02.000 10:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71700 00:14:02.259 10:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:14:02.259 10:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:14:02.259 10:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:02.259 10:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:02.259 10:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:02.259 10:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 71516 00:14:02.259 10:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71516 ']' 00:14:02.259 10:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71516 00:14:02.259 10:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:02.259 10:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:02.259 10:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71516 00:14:02.259 10:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:02.259 10:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:02.259 killing process with pid 71516 00:14:02.259 10:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71516' 00:14:02.259 10:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71516 00:14:02.259 10:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71516 00:14:02.518 10:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:14:02.518 10:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:02.518 10:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:02.518 10:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set 
+x 00:14:02.518 10:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71731 00:14:02.518 10:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:02.518 10:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71731 00:14:02.518 10:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71731 ']' 00:14:02.518 10:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:02.518 10:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:02.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:02.518 10:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:02.518 10:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:02.518 10:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:02.518 [2024-11-15 10:57:49.224652] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:14:02.518 [2024-11-15 10:57:49.224734] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:02.518 [2024-11-15 10:57:49.365347] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:02.776 [2024-11-15 10:57:49.414732] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:02.776 [2024-11-15 10:57:49.414792] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:02.776 [2024-11-15 10:57:49.414802] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:02.776 [2024-11-15 10:57:49.414811] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:02.776 [2024-11-15 10:57:49.414817] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
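The keyring failures in this part of the log come from the key file's mode rather than its contents: judging from the two cases exercised here, keyring_file_add_key refuses a key file whose mode grants group/other access, which is what the chmod 0666 above provokes (and what the retried setup_nvmf_tgt below hits again on the freshly started target), while the earlier chmod 0600 was accepted. As a sketch, the two states applied to the key file from this run:

chmod 0600 /tmp/tmp.cv7zr8cbIf   # accepted by keyring_file_add_key
chmod 0666 /tmp/tmp.cv7zr8cbIf   # rejected: "Invalid permissions for key file ...: 0100666"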
00:14:02.776 [2024-11-15 10:57:49.415210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:02.776 [2024-11-15 10:57:49.484007] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:02.776 10:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:02.776 10:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:02.776 10:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:02.776 10:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:02.777 10:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:02.777 10:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:02.777 10:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.cv7zr8cbIf 00:14:02.777 10:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:14:02.777 10:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.cv7zr8cbIf 00:14:02.777 10:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:14:02.777 10:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:02.777 10:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:14:02.777 10:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:02.777 10:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.cv7zr8cbIf 00:14:02.777 10:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.cv7zr8cbIf 00:14:02.777 10:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:03.035 [2024-11-15 10:57:49.809830] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:03.035 10:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:03.294 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:14:03.560 [2024-11-15 10:57:50.237012] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:03.560 [2024-11-15 10:57:50.237295] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:03.560 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:03.873 malloc0 00:14:03.873 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:04.130 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.cv7zr8cbIf 00:14:04.131 
[2024-11-15 10:57:50.952007] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.cv7zr8cbIf': 0100666 00:14:04.131 [2024-11-15 10:57:50.952077] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:14:04.131 request: 00:14:04.131 { 00:14:04.131 "name": "key0", 00:14:04.131 "path": "/tmp/tmp.cv7zr8cbIf", 00:14:04.131 "method": "keyring_file_add_key", 00:14:04.131 "req_id": 1 00:14:04.131 } 00:14:04.131 Got JSON-RPC error response 00:14:04.131 response: 00:14:04.131 { 00:14:04.131 "code": -1, 00:14:04.131 "message": "Operation not permitted" 00:14:04.131 } 00:14:04.131 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:14:04.390 [2024-11-15 10:57:51.148078] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:14:04.390 [2024-11-15 10:57:51.148188] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:14:04.390 request: 00:14:04.390 { 00:14:04.390 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:04.390 "host": "nqn.2016-06.io.spdk:host1", 00:14:04.390 "psk": "key0", 00:14:04.390 "method": "nvmf_subsystem_add_host", 00:14:04.390 "req_id": 1 00:14:04.390 } 00:14:04.390 Got JSON-RPC error response 00:14:04.390 response: 00:14:04.390 { 00:14:04.390 "code": -32603, 00:14:04.390 "message": "Internal error" 00:14:04.390 } 00:14:04.390 10:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:14:04.390 10:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:04.390 10:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:04.390 10:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:04.390 10:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 71731 00:14:04.390 10:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71731 ']' 00:14:04.390 10:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71731 00:14:04.390 10:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:04.390 10:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:04.390 10:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71731 00:14:04.390 10:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:04.390 10:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:04.390 killing process with pid 71731 00:14:04.390 10:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71731' 00:14:04.390 10:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71731 00:14:04.390 10:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71731 00:14:04.648 10:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.cv7zr8cbIf 00:14:04.648 10:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:14:04.648 10:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:04.648 10:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:04.648 10:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:04.648 10:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71787 00:14:04.648 10:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:04.648 10:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71787 00:14:04.648 10:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71787 ']' 00:14:04.648 10:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:04.648 10:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:04.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:04.648 10:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:04.648 10:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:04.648 10:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:04.648 [2024-11-15 10:57:51.460271] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:14:04.648 [2024-11-15 10:57:51.460378] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:04.907 [2024-11-15 10:57:51.604682] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:04.907 [2024-11-15 10:57:51.662358] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:04.907 [2024-11-15 10:57:51.662424] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:04.907 [2024-11-15 10:57:51.662450] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:04.907 [2024-11-15 10:57:51.662459] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:04.907 [2024-11-15 10:57:51.662466] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
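The keyring_file_add_key failure earlier in this run is a permissions problem, not a missing file: the key file was created mode 0666, the keyring_file backend rejects a PSK that group or others can read, and the subsequent nvmf_subsystem_add_host fails only because 'key0' was never registered. The test recovers by tightening the file mode (the chmod 0600 traced above) and starting a fresh target. A minimal sketch of that fix, reusing the exact path from this run:

  # Restrict the PSK file to its owner, then register it with the target again.
  chmod 0600 /tmp/tmp.cv7zr8cbIf
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.cv7zr8cbIf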
00:14:04.907 [2024-11-15 10:57:51.662896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:04.907 [2024-11-15 10:57:51.716515] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:05.166 10:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:05.166 10:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:05.166 10:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:05.166 10:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:05.166 10:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:05.166 10:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:05.166 10:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.cv7zr8cbIf 00:14:05.166 10:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.cv7zr8cbIf 00:14:05.166 10:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:05.425 [2024-11-15 10:57:52.038590] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:05.425 10:57:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:05.684 10:57:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:14:05.943 [2024-11-15 10:57:52.570816] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:05.943 [2024-11-15 10:57:52.571086] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:05.943 10:57:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:05.943 malloc0 00:14:06.202 10:57:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:06.202 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.cv7zr8cbIf 00:14:06.461 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:14:06.720 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=71830 00:14:06.720 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:06.720 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:06.720 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 71830 /var/tmp/bdevperf.sock 00:14:06.720 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71830 ']' 
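For readability, the setup_nvmf_tgt helper traced above reduces to the following RPC sequence against the target. Every command is taken verbatim from the xtrace output; only the rpc shorthand for the rpc.py path is introduced here to keep the lines short:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k   # -k enables the (experimental) TLS listener
  $rpc bdev_malloc_create 32 4096 -b malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $rpc keyring_file_add_key key0 /tmp/tmp.cv7zr8cbIf            # succeeds now that the file is mode 0600
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0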
00:14:06.720 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:06.720 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:06.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:06.720 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:06.720 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:06.720 10:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:06.720 [2024-11-15 10:57:53.524956] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:14:06.720 [2024-11-15 10:57:53.525033] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71830 ] 00:14:06.979 [2024-11-15 10:57:53.661179] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:06.979 [2024-11-15 10:57:53.712503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:06.979 [2024-11-15 10:57:53.781703] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:07.916 10:57:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:07.916 10:57:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:07.916 10:57:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.cv7zr8cbIf 00:14:07.916 10:57:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:08.175 [2024-11-15 10:57:54.863706] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:08.175 TLSTESTn1 00:14:08.175 10:57:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:14:08.435 10:57:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:14:08.435 "subsystems": [ 00:14:08.435 { 00:14:08.435 "subsystem": "keyring", 00:14:08.435 "config": [ 00:14:08.435 { 00:14:08.435 "method": "keyring_file_add_key", 00:14:08.435 "params": { 00:14:08.435 "name": "key0", 00:14:08.435 "path": "/tmp/tmp.cv7zr8cbIf" 00:14:08.435 } 00:14:08.435 } 00:14:08.435 ] 00:14:08.435 }, 00:14:08.435 { 00:14:08.435 "subsystem": "iobuf", 00:14:08.435 "config": [ 00:14:08.435 { 00:14:08.435 "method": "iobuf_set_options", 00:14:08.435 "params": { 00:14:08.435 "small_pool_count": 8192, 00:14:08.435 "large_pool_count": 1024, 00:14:08.435 "small_bufsize": 8192, 00:14:08.435 "large_bufsize": 135168, 00:14:08.435 "enable_numa": false 00:14:08.435 } 00:14:08.435 } 00:14:08.435 ] 00:14:08.435 }, 00:14:08.435 { 00:14:08.435 "subsystem": "sock", 00:14:08.435 "config": [ 00:14:08.435 { 00:14:08.435 "method": "sock_set_default_impl", 00:14:08.435 "params": { 
00:14:08.435 "impl_name": "uring" 00:14:08.435 } 00:14:08.435 }, 00:14:08.435 { 00:14:08.435 "method": "sock_impl_set_options", 00:14:08.435 "params": { 00:14:08.435 "impl_name": "ssl", 00:14:08.435 "recv_buf_size": 4096, 00:14:08.435 "send_buf_size": 4096, 00:14:08.435 "enable_recv_pipe": true, 00:14:08.435 "enable_quickack": false, 00:14:08.435 "enable_placement_id": 0, 00:14:08.435 "enable_zerocopy_send_server": true, 00:14:08.435 "enable_zerocopy_send_client": false, 00:14:08.435 "zerocopy_threshold": 0, 00:14:08.435 "tls_version": 0, 00:14:08.435 "enable_ktls": false 00:14:08.435 } 00:14:08.435 }, 00:14:08.435 { 00:14:08.435 "method": "sock_impl_set_options", 00:14:08.435 "params": { 00:14:08.435 "impl_name": "posix", 00:14:08.435 "recv_buf_size": 2097152, 00:14:08.435 "send_buf_size": 2097152, 00:14:08.435 "enable_recv_pipe": true, 00:14:08.435 "enable_quickack": false, 00:14:08.435 "enable_placement_id": 0, 00:14:08.435 "enable_zerocopy_send_server": true, 00:14:08.435 "enable_zerocopy_send_client": false, 00:14:08.435 "zerocopy_threshold": 0, 00:14:08.435 "tls_version": 0, 00:14:08.435 "enable_ktls": false 00:14:08.435 } 00:14:08.435 }, 00:14:08.435 { 00:14:08.435 "method": "sock_impl_set_options", 00:14:08.435 "params": { 00:14:08.435 "impl_name": "uring", 00:14:08.435 "recv_buf_size": 2097152, 00:14:08.435 "send_buf_size": 2097152, 00:14:08.435 "enable_recv_pipe": true, 00:14:08.435 "enable_quickack": false, 00:14:08.435 "enable_placement_id": 0, 00:14:08.435 "enable_zerocopy_send_server": false, 00:14:08.435 "enable_zerocopy_send_client": false, 00:14:08.435 "zerocopy_threshold": 0, 00:14:08.435 "tls_version": 0, 00:14:08.435 "enable_ktls": false 00:14:08.435 } 00:14:08.435 } 00:14:08.435 ] 00:14:08.435 }, 00:14:08.435 { 00:14:08.435 "subsystem": "vmd", 00:14:08.435 "config": [] 00:14:08.435 }, 00:14:08.435 { 00:14:08.435 "subsystem": "accel", 00:14:08.435 "config": [ 00:14:08.435 { 00:14:08.435 "method": "accel_set_options", 00:14:08.435 "params": { 00:14:08.435 "small_cache_size": 128, 00:14:08.435 "large_cache_size": 16, 00:14:08.435 "task_count": 2048, 00:14:08.435 "sequence_count": 2048, 00:14:08.435 "buf_count": 2048 00:14:08.435 } 00:14:08.435 } 00:14:08.435 ] 00:14:08.435 }, 00:14:08.435 { 00:14:08.435 "subsystem": "bdev", 00:14:08.435 "config": [ 00:14:08.435 { 00:14:08.435 "method": "bdev_set_options", 00:14:08.435 "params": { 00:14:08.435 "bdev_io_pool_size": 65535, 00:14:08.435 "bdev_io_cache_size": 256, 00:14:08.435 "bdev_auto_examine": true, 00:14:08.435 "iobuf_small_cache_size": 128, 00:14:08.435 "iobuf_large_cache_size": 16 00:14:08.435 } 00:14:08.435 }, 00:14:08.435 { 00:14:08.435 "method": "bdev_raid_set_options", 00:14:08.435 "params": { 00:14:08.435 "process_window_size_kb": 1024, 00:14:08.435 "process_max_bandwidth_mb_sec": 0 00:14:08.435 } 00:14:08.435 }, 00:14:08.435 { 00:14:08.435 "method": "bdev_iscsi_set_options", 00:14:08.435 "params": { 00:14:08.435 "timeout_sec": 30 00:14:08.435 } 00:14:08.435 }, 00:14:08.435 { 00:14:08.435 "method": "bdev_nvme_set_options", 00:14:08.435 "params": { 00:14:08.435 "action_on_timeout": "none", 00:14:08.435 "timeout_us": 0, 00:14:08.435 "timeout_admin_us": 0, 00:14:08.435 "keep_alive_timeout_ms": 10000, 00:14:08.435 "arbitration_burst": 0, 00:14:08.435 "low_priority_weight": 0, 00:14:08.435 "medium_priority_weight": 0, 00:14:08.435 "high_priority_weight": 0, 00:14:08.435 "nvme_adminq_poll_period_us": 10000, 00:14:08.435 "nvme_ioq_poll_period_us": 0, 00:14:08.435 "io_queue_requests": 0, 00:14:08.435 "delay_cmd_submit": 
true, 00:14:08.435 "transport_retry_count": 4, 00:14:08.435 "bdev_retry_count": 3, 00:14:08.435 "transport_ack_timeout": 0, 00:14:08.435 "ctrlr_loss_timeout_sec": 0, 00:14:08.435 "reconnect_delay_sec": 0, 00:14:08.435 "fast_io_fail_timeout_sec": 0, 00:14:08.435 "disable_auto_failback": false, 00:14:08.435 "generate_uuids": false, 00:14:08.435 "transport_tos": 0, 00:14:08.435 "nvme_error_stat": false, 00:14:08.435 "rdma_srq_size": 0, 00:14:08.435 "io_path_stat": false, 00:14:08.435 "allow_accel_sequence": false, 00:14:08.435 "rdma_max_cq_size": 0, 00:14:08.435 "rdma_cm_event_timeout_ms": 0, 00:14:08.435 "dhchap_digests": [ 00:14:08.435 "sha256", 00:14:08.435 "sha384", 00:14:08.435 "sha512" 00:14:08.435 ], 00:14:08.435 "dhchap_dhgroups": [ 00:14:08.435 "null", 00:14:08.435 "ffdhe2048", 00:14:08.435 "ffdhe3072", 00:14:08.435 "ffdhe4096", 00:14:08.435 "ffdhe6144", 00:14:08.435 "ffdhe8192" 00:14:08.435 ] 00:14:08.435 } 00:14:08.435 }, 00:14:08.435 { 00:14:08.435 "method": "bdev_nvme_set_hotplug", 00:14:08.435 "params": { 00:14:08.435 "period_us": 100000, 00:14:08.435 "enable": false 00:14:08.435 } 00:14:08.435 }, 00:14:08.436 { 00:14:08.436 "method": "bdev_malloc_create", 00:14:08.436 "params": { 00:14:08.436 "name": "malloc0", 00:14:08.436 "num_blocks": 8192, 00:14:08.436 "block_size": 4096, 00:14:08.436 "physical_block_size": 4096, 00:14:08.436 "uuid": "c8104c14-b38a-4ce5-bfb5-9f78344ebdf7", 00:14:08.436 "optimal_io_boundary": 0, 00:14:08.436 "md_size": 0, 00:14:08.436 "dif_type": 0, 00:14:08.436 "dif_is_head_of_md": false, 00:14:08.436 "dif_pi_format": 0 00:14:08.436 } 00:14:08.436 }, 00:14:08.436 { 00:14:08.436 "method": "bdev_wait_for_examine" 00:14:08.436 } 00:14:08.436 ] 00:14:08.436 }, 00:14:08.436 { 00:14:08.436 "subsystem": "nbd", 00:14:08.436 "config": [] 00:14:08.436 }, 00:14:08.436 { 00:14:08.436 "subsystem": "scheduler", 00:14:08.436 "config": [ 00:14:08.436 { 00:14:08.436 "method": "framework_set_scheduler", 00:14:08.436 "params": { 00:14:08.436 "name": "static" 00:14:08.436 } 00:14:08.436 } 00:14:08.436 ] 00:14:08.436 }, 00:14:08.436 { 00:14:08.436 "subsystem": "nvmf", 00:14:08.436 "config": [ 00:14:08.436 { 00:14:08.436 "method": "nvmf_set_config", 00:14:08.436 "params": { 00:14:08.436 "discovery_filter": "match_any", 00:14:08.436 "admin_cmd_passthru": { 00:14:08.436 "identify_ctrlr": false 00:14:08.436 }, 00:14:08.436 "dhchap_digests": [ 00:14:08.436 "sha256", 00:14:08.436 "sha384", 00:14:08.436 "sha512" 00:14:08.436 ], 00:14:08.436 "dhchap_dhgroups": [ 00:14:08.436 "null", 00:14:08.436 "ffdhe2048", 00:14:08.436 "ffdhe3072", 00:14:08.436 "ffdhe4096", 00:14:08.436 "ffdhe6144", 00:14:08.436 "ffdhe8192" 00:14:08.436 ] 00:14:08.436 } 00:14:08.436 }, 00:14:08.436 { 00:14:08.436 "method": "nvmf_set_max_subsystems", 00:14:08.436 "params": { 00:14:08.436 "max_subsystems": 1024 00:14:08.436 } 00:14:08.436 }, 00:14:08.436 { 00:14:08.436 "method": "nvmf_set_crdt", 00:14:08.436 "params": { 00:14:08.436 "crdt1": 0, 00:14:08.436 "crdt2": 0, 00:14:08.436 "crdt3": 0 00:14:08.436 } 00:14:08.436 }, 00:14:08.436 { 00:14:08.436 "method": "nvmf_create_transport", 00:14:08.436 "params": { 00:14:08.436 "trtype": "TCP", 00:14:08.436 "max_queue_depth": 128, 00:14:08.436 "max_io_qpairs_per_ctrlr": 127, 00:14:08.436 "in_capsule_data_size": 4096, 00:14:08.436 "max_io_size": 131072, 00:14:08.436 "io_unit_size": 131072, 00:14:08.436 "max_aq_depth": 128, 00:14:08.436 "num_shared_buffers": 511, 00:14:08.436 "buf_cache_size": 4294967295, 00:14:08.436 "dif_insert_or_strip": false, 00:14:08.436 "zcopy": false, 
00:14:08.436 "c2h_success": false, 00:14:08.436 "sock_priority": 0, 00:14:08.436 "abort_timeout_sec": 1, 00:14:08.436 "ack_timeout": 0, 00:14:08.436 "data_wr_pool_size": 0 00:14:08.436 } 00:14:08.436 }, 00:14:08.436 { 00:14:08.436 "method": "nvmf_create_subsystem", 00:14:08.436 "params": { 00:14:08.436 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:08.436 "allow_any_host": false, 00:14:08.436 "serial_number": "SPDK00000000000001", 00:14:08.436 "model_number": "SPDK bdev Controller", 00:14:08.436 "max_namespaces": 10, 00:14:08.436 "min_cntlid": 1, 00:14:08.436 "max_cntlid": 65519, 00:14:08.436 "ana_reporting": false 00:14:08.436 } 00:14:08.436 }, 00:14:08.436 { 00:14:08.436 "method": "nvmf_subsystem_add_host", 00:14:08.436 "params": { 00:14:08.436 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:08.436 "host": "nqn.2016-06.io.spdk:host1", 00:14:08.436 "psk": "key0" 00:14:08.436 } 00:14:08.436 }, 00:14:08.436 { 00:14:08.436 "method": "nvmf_subsystem_add_ns", 00:14:08.436 "params": { 00:14:08.436 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:08.436 "namespace": { 00:14:08.436 "nsid": 1, 00:14:08.436 "bdev_name": "malloc0", 00:14:08.436 "nguid": "C8104C14B38A4CE5BFB59F78344EBDF7", 00:14:08.436 "uuid": "c8104c14-b38a-4ce5-bfb5-9f78344ebdf7", 00:14:08.436 "no_auto_visible": false 00:14:08.436 } 00:14:08.436 } 00:14:08.436 }, 00:14:08.436 { 00:14:08.436 "method": "nvmf_subsystem_add_listener", 00:14:08.436 "params": { 00:14:08.436 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:08.436 "listen_address": { 00:14:08.436 "trtype": "TCP", 00:14:08.436 "adrfam": "IPv4", 00:14:08.436 "traddr": "10.0.0.3", 00:14:08.436 "trsvcid": "4420" 00:14:08.436 }, 00:14:08.436 "secure_channel": true 00:14:08.436 } 00:14:08.436 } 00:14:08.436 ] 00:14:08.436 } 00:14:08.436 ] 00:14:08.436 }' 00:14:08.436 10:57:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:14:09.005 10:57:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:14:09.005 "subsystems": [ 00:14:09.005 { 00:14:09.005 "subsystem": "keyring", 00:14:09.005 "config": [ 00:14:09.005 { 00:14:09.005 "method": "keyring_file_add_key", 00:14:09.005 "params": { 00:14:09.005 "name": "key0", 00:14:09.005 "path": "/tmp/tmp.cv7zr8cbIf" 00:14:09.005 } 00:14:09.005 } 00:14:09.005 ] 00:14:09.005 }, 00:14:09.005 { 00:14:09.005 "subsystem": "iobuf", 00:14:09.005 "config": [ 00:14:09.005 { 00:14:09.005 "method": "iobuf_set_options", 00:14:09.005 "params": { 00:14:09.005 "small_pool_count": 8192, 00:14:09.005 "large_pool_count": 1024, 00:14:09.005 "small_bufsize": 8192, 00:14:09.005 "large_bufsize": 135168, 00:14:09.005 "enable_numa": false 00:14:09.005 } 00:14:09.005 } 00:14:09.005 ] 00:14:09.005 }, 00:14:09.005 { 00:14:09.005 "subsystem": "sock", 00:14:09.005 "config": [ 00:14:09.005 { 00:14:09.005 "method": "sock_set_default_impl", 00:14:09.005 "params": { 00:14:09.005 "impl_name": "uring" 00:14:09.005 } 00:14:09.005 }, 00:14:09.005 { 00:14:09.005 "method": "sock_impl_set_options", 00:14:09.005 "params": { 00:14:09.005 "impl_name": "ssl", 00:14:09.005 "recv_buf_size": 4096, 00:14:09.005 "send_buf_size": 4096, 00:14:09.005 "enable_recv_pipe": true, 00:14:09.005 "enable_quickack": false, 00:14:09.005 "enable_placement_id": 0, 00:14:09.005 "enable_zerocopy_send_server": true, 00:14:09.005 "enable_zerocopy_send_client": false, 00:14:09.005 "zerocopy_threshold": 0, 00:14:09.005 "tls_version": 0, 00:14:09.005 "enable_ktls": false 00:14:09.005 } 00:14:09.005 }, 
00:14:09.005 { 00:14:09.005 "method": "sock_impl_set_options", 00:14:09.005 "params": { 00:14:09.005 "impl_name": "posix", 00:14:09.005 "recv_buf_size": 2097152, 00:14:09.005 "send_buf_size": 2097152, 00:14:09.005 "enable_recv_pipe": true, 00:14:09.005 "enable_quickack": false, 00:14:09.005 "enable_placement_id": 0, 00:14:09.005 "enable_zerocopy_send_server": true, 00:14:09.005 "enable_zerocopy_send_client": false, 00:14:09.005 "zerocopy_threshold": 0, 00:14:09.005 "tls_version": 0, 00:14:09.006 "enable_ktls": false 00:14:09.006 } 00:14:09.006 }, 00:14:09.006 { 00:14:09.006 "method": "sock_impl_set_options", 00:14:09.006 "params": { 00:14:09.006 "impl_name": "uring", 00:14:09.006 "recv_buf_size": 2097152, 00:14:09.006 "send_buf_size": 2097152, 00:14:09.006 "enable_recv_pipe": true, 00:14:09.006 "enable_quickack": false, 00:14:09.006 "enable_placement_id": 0, 00:14:09.006 "enable_zerocopy_send_server": false, 00:14:09.006 "enable_zerocopy_send_client": false, 00:14:09.006 "zerocopy_threshold": 0, 00:14:09.006 "tls_version": 0, 00:14:09.006 "enable_ktls": false 00:14:09.006 } 00:14:09.006 } 00:14:09.006 ] 00:14:09.006 }, 00:14:09.006 { 00:14:09.006 "subsystem": "vmd", 00:14:09.006 "config": [] 00:14:09.006 }, 00:14:09.006 { 00:14:09.006 "subsystem": "accel", 00:14:09.006 "config": [ 00:14:09.006 { 00:14:09.006 "method": "accel_set_options", 00:14:09.006 "params": { 00:14:09.006 "small_cache_size": 128, 00:14:09.006 "large_cache_size": 16, 00:14:09.006 "task_count": 2048, 00:14:09.006 "sequence_count": 2048, 00:14:09.006 "buf_count": 2048 00:14:09.006 } 00:14:09.006 } 00:14:09.006 ] 00:14:09.006 }, 00:14:09.006 { 00:14:09.006 "subsystem": "bdev", 00:14:09.006 "config": [ 00:14:09.006 { 00:14:09.006 "method": "bdev_set_options", 00:14:09.006 "params": { 00:14:09.006 "bdev_io_pool_size": 65535, 00:14:09.006 "bdev_io_cache_size": 256, 00:14:09.006 "bdev_auto_examine": true, 00:14:09.006 "iobuf_small_cache_size": 128, 00:14:09.006 "iobuf_large_cache_size": 16 00:14:09.006 } 00:14:09.006 }, 00:14:09.006 { 00:14:09.006 "method": "bdev_raid_set_options", 00:14:09.006 "params": { 00:14:09.006 "process_window_size_kb": 1024, 00:14:09.006 "process_max_bandwidth_mb_sec": 0 00:14:09.006 } 00:14:09.006 }, 00:14:09.006 { 00:14:09.006 "method": "bdev_iscsi_set_options", 00:14:09.006 "params": { 00:14:09.006 "timeout_sec": 30 00:14:09.006 } 00:14:09.006 }, 00:14:09.006 { 00:14:09.006 "method": "bdev_nvme_set_options", 00:14:09.006 "params": { 00:14:09.006 "action_on_timeout": "none", 00:14:09.006 "timeout_us": 0, 00:14:09.006 "timeout_admin_us": 0, 00:14:09.006 "keep_alive_timeout_ms": 10000, 00:14:09.006 "arbitration_burst": 0, 00:14:09.006 "low_priority_weight": 0, 00:14:09.006 "medium_priority_weight": 0, 00:14:09.006 "high_priority_weight": 0, 00:14:09.006 "nvme_adminq_poll_period_us": 10000, 00:14:09.006 "nvme_ioq_poll_period_us": 0, 00:14:09.006 "io_queue_requests": 512, 00:14:09.006 "delay_cmd_submit": true, 00:14:09.006 "transport_retry_count": 4, 00:14:09.006 "bdev_retry_count": 3, 00:14:09.006 "transport_ack_timeout": 0, 00:14:09.006 "ctrlr_loss_timeout_sec": 0, 00:14:09.006 "reconnect_delay_sec": 0, 00:14:09.006 "fast_io_fail_timeout_sec": 0, 00:14:09.006 "disable_auto_failback": false, 00:14:09.006 "generate_uuids": false, 00:14:09.006 "transport_tos": 0, 00:14:09.006 "nvme_error_stat": false, 00:14:09.006 "rdma_srq_size": 0, 00:14:09.006 "io_path_stat": false, 00:14:09.006 "allow_accel_sequence": false, 00:14:09.006 "rdma_max_cq_size": 0, 00:14:09.006 "rdma_cm_event_timeout_ms": 0, 00:14:09.006 
"dhchap_digests": [ 00:14:09.006 "sha256", 00:14:09.006 "sha384", 00:14:09.006 "sha512" 00:14:09.006 ], 00:14:09.006 "dhchap_dhgroups": [ 00:14:09.006 "null", 00:14:09.006 "ffdhe2048", 00:14:09.006 "ffdhe3072", 00:14:09.006 "ffdhe4096", 00:14:09.006 "ffdhe6144", 00:14:09.006 "ffdhe8192" 00:14:09.006 ] 00:14:09.006 } 00:14:09.006 }, 00:14:09.006 { 00:14:09.006 "method": "bdev_nvme_attach_controller", 00:14:09.006 "params": { 00:14:09.006 "name": "TLSTEST", 00:14:09.006 "trtype": "TCP", 00:14:09.006 "adrfam": "IPv4", 00:14:09.006 "traddr": "10.0.0.3", 00:14:09.006 "trsvcid": "4420", 00:14:09.006 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:09.006 "prchk_reftag": false, 00:14:09.006 "prchk_guard": false, 00:14:09.006 "ctrlr_loss_timeout_sec": 0, 00:14:09.006 "reconnect_delay_sec": 0, 00:14:09.006 "fast_io_fail_timeout_sec": 0, 00:14:09.006 "psk": "key0", 00:14:09.006 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:09.006 "hdgst": false, 00:14:09.006 "ddgst": false, 00:14:09.006 "multipath": "multipath" 00:14:09.006 } 00:14:09.006 }, 00:14:09.006 { 00:14:09.006 "method": "bdev_nvme_set_hotplug", 00:14:09.006 "params": { 00:14:09.006 "period_us": 100000, 00:14:09.006 "enable": false 00:14:09.006 } 00:14:09.006 }, 00:14:09.006 { 00:14:09.006 "method": "bdev_wait_for_examine" 00:14:09.006 } 00:14:09.006 ] 00:14:09.006 }, 00:14:09.006 { 00:14:09.006 "subsystem": "nbd", 00:14:09.006 "config": [] 00:14:09.006 } 00:14:09.006 ] 00:14:09.006 }' 00:14:09.006 10:57:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 71830 00:14:09.006 10:57:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71830 ']' 00:14:09.006 10:57:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71830 00:14:09.006 10:57:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:09.006 10:57:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:09.006 10:57:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71830 00:14:09.006 killing process with pid 71830 00:14:09.006 Received shutdown signal, test time was about 10.000000 seconds 00:14:09.006 00:14:09.006 Latency(us) 00:14:09.006 [2024-11-15T10:57:55.867Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:09.006 [2024-11-15T10:57:55.867Z] =================================================================================================================== 00:14:09.006 [2024-11-15T10:57:55.867Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:09.006 10:57:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:14:09.006 10:57:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:14:09.006 10:57:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71830' 00:14:09.006 10:57:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71830 00:14:09.006 10:57:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71830 00:14:09.266 10:57:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 71787 00:14:09.266 10:57:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71787 ']' 00:14:09.266 10:57:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # 
kill -0 71787 00:14:09.266 10:57:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:09.266 10:57:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:09.266 10:57:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71787 00:14:09.266 killing process with pid 71787 00:14:09.266 10:57:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:09.266 10:57:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:09.266 10:57:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71787' 00:14:09.266 10:57:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71787 00:14:09.266 10:57:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71787 00:14:09.526 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:14:09.526 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:09.526 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:09.526 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:09.526 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:14:09.526 "subsystems": [ 00:14:09.526 { 00:14:09.526 "subsystem": "keyring", 00:14:09.526 "config": [ 00:14:09.526 { 00:14:09.526 "method": "keyring_file_add_key", 00:14:09.526 "params": { 00:14:09.526 "name": "key0", 00:14:09.526 "path": "/tmp/tmp.cv7zr8cbIf" 00:14:09.526 } 00:14:09.526 } 00:14:09.526 ] 00:14:09.526 }, 00:14:09.526 { 00:14:09.526 "subsystem": "iobuf", 00:14:09.526 "config": [ 00:14:09.526 { 00:14:09.526 "method": "iobuf_set_options", 00:14:09.526 "params": { 00:14:09.526 "small_pool_count": 8192, 00:14:09.526 "large_pool_count": 1024, 00:14:09.526 "small_bufsize": 8192, 00:14:09.526 "large_bufsize": 135168, 00:14:09.526 "enable_numa": false 00:14:09.526 } 00:14:09.526 } 00:14:09.526 ] 00:14:09.526 }, 00:14:09.526 { 00:14:09.526 "subsystem": "sock", 00:14:09.526 "config": [ 00:14:09.526 { 00:14:09.526 "method": "sock_set_default_impl", 00:14:09.526 "params": { 00:14:09.526 "impl_name": "uring" 00:14:09.526 } 00:14:09.526 }, 00:14:09.526 { 00:14:09.526 "method": "sock_impl_set_options", 00:14:09.526 "params": { 00:14:09.526 "impl_name": "ssl", 00:14:09.526 "recv_buf_size": 4096, 00:14:09.526 "send_buf_size": 4096, 00:14:09.526 "enable_recv_pipe": true, 00:14:09.526 "enable_quickack": false, 00:14:09.526 "enable_placement_id": 0, 00:14:09.526 "enable_zerocopy_send_server": true, 00:14:09.526 "enable_zerocopy_send_client": false, 00:14:09.526 "zerocopy_threshold": 0, 00:14:09.526 "tls_version": 0, 00:14:09.526 "enable_ktls": false 00:14:09.526 } 00:14:09.526 }, 00:14:09.526 { 00:14:09.526 "method": "sock_impl_set_options", 00:14:09.526 "params": { 00:14:09.526 "impl_name": "posix", 00:14:09.526 "recv_buf_size": 2097152, 00:14:09.526 "send_buf_size": 2097152, 00:14:09.526 "enable_recv_pipe": true, 00:14:09.526 "enable_quickack": false, 00:14:09.526 "enable_placement_id": 0, 00:14:09.526 "enable_zerocopy_send_server": true, 00:14:09.526 "enable_zerocopy_send_client": false, 00:14:09.526 "zerocopy_threshold": 0, 00:14:09.526 "tls_version": 0, 00:14:09.526 "enable_ktls": false 
00:14:09.526 } 00:14:09.526 }, 00:14:09.526 { 00:14:09.526 "method": "sock_impl_set_options", 00:14:09.526 "params": { 00:14:09.526 "impl_name": "uring", 00:14:09.526 "recv_buf_size": 2097152, 00:14:09.526 "send_buf_size": 2097152, 00:14:09.526 "enable_recv_pipe": true, 00:14:09.526 "enable_quickack": false, 00:14:09.526 "enable_placement_id": 0, 00:14:09.526 "enable_zerocopy_send_server": false, 00:14:09.526 "enable_zerocopy_send_client": false, 00:14:09.526 "zerocopy_threshold": 0, 00:14:09.526 "tls_version": 0, 00:14:09.526 "enable_ktls": false 00:14:09.526 } 00:14:09.526 } 00:14:09.526 ] 00:14:09.526 }, 00:14:09.526 { 00:14:09.526 "subsystem": "vmd", 00:14:09.526 "config": [] 00:14:09.526 }, 00:14:09.526 { 00:14:09.526 "subsystem": "accel", 00:14:09.526 "config": [ 00:14:09.526 { 00:14:09.526 "method": "accel_set_options", 00:14:09.526 "params": { 00:14:09.526 "small_cache_size": 128, 00:14:09.526 "large_cache_size": 16, 00:14:09.526 "task_count": 2048, 00:14:09.526 "sequence_count": 2048, 00:14:09.526 "buf_count": 2048 00:14:09.526 } 00:14:09.526 } 00:14:09.526 ] 00:14:09.526 }, 00:14:09.527 { 00:14:09.527 "subsystem": "bdev", 00:14:09.527 "config": [ 00:14:09.527 { 00:14:09.527 "method": "bdev_set_options", 00:14:09.527 "params": { 00:14:09.527 "bdev_io_pool_size": 65535, 00:14:09.527 "bdev_io_cache_size": 256, 00:14:09.527 "bdev_auto_examine": true, 00:14:09.527 "iobuf_small_cache_size": 128, 00:14:09.527 "iobuf_large_cache_size": 16 00:14:09.527 } 00:14:09.527 }, 00:14:09.527 { 00:14:09.527 "method": "bdev_raid_set_options", 00:14:09.527 "params": { 00:14:09.527 "process_window_size_kb": 1024, 00:14:09.527 "process_max_bandwidth_mb_sec": 0 00:14:09.527 } 00:14:09.527 }, 00:14:09.527 { 00:14:09.527 "method": "bdev_iscsi_set_options", 00:14:09.527 "params": { 00:14:09.527 "timeout_sec": 30 00:14:09.527 } 00:14:09.527 }, 00:14:09.527 { 00:14:09.527 "method": "bdev_nvme_set_options", 00:14:09.527 "params": { 00:14:09.527 "action_on_timeout": "none", 00:14:09.527 "timeout_us": 0, 00:14:09.527 "timeout_admin_us": 0, 00:14:09.527 "keep_alive_timeout_ms": 10000, 00:14:09.527 "arbitration_burst": 0, 00:14:09.527 "low_priority_weight": 0, 00:14:09.527 "medium_priority_weight": 0, 00:14:09.527 "high_priority_weight": 0, 00:14:09.527 "nvme_adminq_poll_period_us": 10000, 00:14:09.527 "nvme_ioq_poll_period_us": 0, 00:14:09.527 "io_queue_requests": 0, 00:14:09.527 "delay_cmd_submit": true, 00:14:09.527 "transport_retry_count": 4, 00:14:09.527 "bdev_retry_count": 3, 00:14:09.527 "transport_ack_timeout": 0, 00:14:09.527 "ctrlr_loss_timeout_sec": 0, 00:14:09.527 "reconnect_delay_sec": 0, 00:14:09.527 "fast_io_fail_timeout_sec": 0, 00:14:09.527 "disable_auto_failback": false, 00:14:09.527 "generate_uuids": false, 00:14:09.527 "transport_tos": 0, 00:14:09.527 "nvme_error_stat": false, 00:14:09.527 "rdma_srq_size": 0, 00:14:09.527 "io_path_stat": false, 00:14:09.527 "allow_accel_sequence": false, 00:14:09.527 "rdma_max_cq_size": 0, 00:14:09.527 "rdma_cm_event_timeout_ms": 0, 00:14:09.527 "dhchap_digests": [ 00:14:09.527 "sha256", 00:14:09.527 "sha384", 00:14:09.527 "sha512" 00:14:09.527 ], 00:14:09.527 "dhchap_dhgroups": [ 00:14:09.527 "null", 00:14:09.527 "ffdhe2048", 00:14:09.527 "ffdhe3072", 00:14:09.527 "ffdhe4096", 00:14:09.527 "ffdhe6144", 00:14:09.527 "ffdhe8192" 00:14:09.527 ] 00:14:09.527 } 00:14:09.527 }, 00:14:09.527 { 00:14:09.527 "method": "bdev_nvme_set_hotplug", 00:14:09.527 "params": { 00:14:09.527 "period_us": 100000, 00:14:09.527 "enable": false 00:14:09.527 } 00:14:09.527 }, 
00:14:09.527 { 00:14:09.527 "method": "bdev_malloc_create", 00:14:09.527 "params": { 00:14:09.527 "name": "malloc0", 00:14:09.527 "num_blocks": 8192, 00:14:09.527 "block_size": 4096, 00:14:09.527 "physical_block_size": 4096, 00:14:09.527 "uuid": "c8104c14-b38a-4ce5-bfb5-9f78344ebdf7", 00:14:09.527 "optimal_io_boundary": 0, 00:14:09.527 "md_size": 0, 00:14:09.527 "dif_type": 0, 00:14:09.527 "dif_is_head_of_md": false, 00:14:09.527 "dif_pi_format": 0 00:14:09.527 } 00:14:09.527 }, 00:14:09.527 { 00:14:09.527 "method": "bdev_wait_for_examine" 00:14:09.527 } 00:14:09.527 ] 00:14:09.527 }, 00:14:09.527 { 00:14:09.527 "subsystem": "nbd", 00:14:09.527 "config": [] 00:14:09.527 }, 00:14:09.527 { 00:14:09.527 "subsystem": "scheduler", 00:14:09.527 "config": [ 00:14:09.527 { 00:14:09.527 "method": "framework_set_scheduler", 00:14:09.527 "params": { 00:14:09.527 "name": "static" 00:14:09.527 } 00:14:09.527 } 00:14:09.527 ] 00:14:09.527 }, 00:14:09.527 { 00:14:09.527 "subsystem": "nvmf", 00:14:09.527 "config": [ 00:14:09.527 { 00:14:09.527 "method": "nvmf_set_config", 00:14:09.527 "params": { 00:14:09.527 "discovery_filter": "match_any", 00:14:09.527 "admin_cmd_passthru": { 00:14:09.527 "identify_ctrlr": false 00:14:09.527 }, 00:14:09.527 "dhchap_digests": [ 00:14:09.527 "sha256", 00:14:09.527 "sha384", 00:14:09.527 "sha512" 00:14:09.527 ], 00:14:09.527 "dhchap_dhgroups": [ 00:14:09.527 "null", 00:14:09.527 "ffdhe2048", 00:14:09.527 "ffdhe3072", 00:14:09.527 "ffdhe4096", 00:14:09.527 "ffdhe6144", 00:14:09.527 "ffdhe8192" 00:14:09.527 ] 00:14:09.527 } 00:14:09.527 }, 00:14:09.527 { 00:14:09.527 "method": "nvmf_set_max_subsystems", 00:14:09.527 "params": { 00:14:09.527 "max_subsystems": 1024 00:14:09.527 } 00:14:09.527 }, 00:14:09.527 { 00:14:09.527 "method": "nvmf_set_crdt", 00:14:09.527 "params": { 00:14:09.527 "crdt1": 0, 00:14:09.527 "crdt2": 0, 00:14:09.527 "crdt3": 0 00:14:09.527 } 00:14:09.527 }, 00:14:09.527 { 00:14:09.527 "method": "nvmf_create_transport", 00:14:09.527 "params": { 00:14:09.527 "trtype": "TCP", 00:14:09.527 "max_queue_depth": 128, 00:14:09.527 "max_io_qpairs_per_ctrlr": 127, 00:14:09.527 "in_capsule_data_size": 4096, 00:14:09.527 "max_io_size": 131072, 00:14:09.527 "io_unit_size": 131072, 00:14:09.527 "max_aq_depth": 128, 00:14:09.527 "num_shared_buffers": 511, 00:14:09.527 "buf_cache_size": 4294967295, 00:14:09.527 "dif_insert_or_strip": false, 00:14:09.527 "zcopy": false, 00:14:09.527 "c2h_success": false, 00:14:09.527 "sock_priority": 0, 00:14:09.527 "abort_timeout_sec": 1, 00:14:09.527 "ack_timeout": 0, 00:14:09.527 "data_wr_pool_size": 0 00:14:09.527 } 00:14:09.527 }, 00:14:09.527 { 00:14:09.527 "method": "nvmf_create_subsystem", 00:14:09.527 "params": { 00:14:09.527 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:09.527 "allow_any_host": false, 00:14:09.527 "serial_number": "SPDK00000000000001", 00:14:09.527 "model_number": "SPDK bdev Controller", 00:14:09.527 "max_namespaces": 10, 00:14:09.527 "min_cntlid": 1, 00:14:09.527 "max_cntlid": 65519, 00:14:09.527 "ana_reporting": false 00:14:09.527 } 00:14:09.527 }, 00:14:09.527 { 00:14:09.527 "method": "nvmf_subsystem_add_host", 00:14:09.527 "params": { 00:14:09.527 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:09.527 "host": "nqn.2016-06.io.spdk:host1", 00:14:09.527 "psk": "key0" 00:14:09.527 } 00:14:09.527 }, 00:14:09.527 { 00:14:09.527 "method": "nvmf_subsystem_add_ns", 00:14:09.527 "params": { 00:14:09.527 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:09.527 "namespace": { 00:14:09.527 "nsid": 1, 00:14:09.527 "bdev_name": "malloc0", 
00:14:09.527 "nguid": "C8104C14B38A4CE5BFB59F78344EBDF7", 00:14:09.527 "uuid": "c8104c14-b38a-4ce5-bfb5-9f78344ebdf7", 00:14:09.527 "no_auto_visible": false 00:14:09.527 } 00:14:09.527 } 00:14:09.527 }, 00:14:09.527 { 00:14:09.527 "method": "nvmf_subsystem_add_listener", 00:14:09.527 "params": { 00:14:09.527 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:09.527 "listen_address": { 00:14:09.527 "trtype": "TCP", 00:14:09.527 "adrfam": "IPv4", 00:14:09.527 "traddr": "10.0.0.3", 00:14:09.527 "trsvcid": "4420" 00:14:09.527 }, 00:14:09.527 "secure_channel": true 00:14:09.527 } 00:14:09.527 } 00:14:09.527 ] 00:14:09.527 } 00:14:09.527 ] 00:14:09.527 }' 00:14:09.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:09.527 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71885 00:14:09.527 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71885 00:14:09.527 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71885 ']' 00:14:09.527 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:09.527 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:14:09.527 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:09.527 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:09.528 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:09.528 10:57:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:09.528 [2024-11-15 10:57:56.247392] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:14:09.528 [2024-11-15 10:57:56.247543] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:09.786 [2024-11-15 10:57:56.391449] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:09.786 [2024-11-15 10:57:56.445336] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:09.786 [2024-11-15 10:57:56.445412] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:09.786 [2024-11-15 10:57:56.445424] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:09.786 [2024-11-15 10:57:56.445433] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:09.786 [2024-11-15 10:57:56.445440] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:09.786 [2024-11-15 10:57:56.445943] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:09.786 [2024-11-15 10:57:56.630551] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:10.046 [2024-11-15 10:57:56.721191] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:10.046 [2024-11-15 10:57:56.753137] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:10.046 [2024-11-15 10:57:56.753370] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:10.305 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:10.305 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:10.305 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:10.305 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:10.305 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:10.565 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:10.565 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=71917 00:14:10.565 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 71917 /var/tmp/bdevperf.sock 00:14:10.565 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71917 ']' 00:14:10.565 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:10.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:10.565 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:10.565 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
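The initiator side is configured symmetrically: the bdevperf instance started here receives its whole configuration on /dev/fd/63, but earlier in the run (target/tls.sh@193 and @194) the same state was built with explicit RPCs against bdevperf's own socket, and the I/O is then driven by bdevperf.py once the TLS controller is attached. That explicit sequence, copied from the trace:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.cv7zr8cbIf
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests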
00:14:10.565 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:10.565 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:10.565 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:14:10.565 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:14:10.565 "subsystems": [ 00:14:10.565 { 00:14:10.565 "subsystem": "keyring", 00:14:10.565 "config": [ 00:14:10.565 { 00:14:10.565 "method": "keyring_file_add_key", 00:14:10.565 "params": { 00:14:10.565 "name": "key0", 00:14:10.565 "path": "/tmp/tmp.cv7zr8cbIf" 00:14:10.565 } 00:14:10.565 } 00:14:10.565 ] 00:14:10.565 }, 00:14:10.565 { 00:14:10.565 "subsystem": "iobuf", 00:14:10.565 "config": [ 00:14:10.565 { 00:14:10.565 "method": "iobuf_set_options", 00:14:10.565 "params": { 00:14:10.565 "small_pool_count": 8192, 00:14:10.565 "large_pool_count": 1024, 00:14:10.565 "small_bufsize": 8192, 00:14:10.565 "large_bufsize": 135168, 00:14:10.565 "enable_numa": false 00:14:10.565 } 00:14:10.565 } 00:14:10.565 ] 00:14:10.565 }, 00:14:10.565 { 00:14:10.565 "subsystem": "sock", 00:14:10.565 "config": [ 00:14:10.565 { 00:14:10.565 "method": "sock_set_default_impl", 00:14:10.565 "params": { 00:14:10.565 "impl_name": "uring" 00:14:10.565 } 00:14:10.565 }, 00:14:10.565 { 00:14:10.565 "method": "sock_impl_set_options", 00:14:10.565 "params": { 00:14:10.565 "impl_name": "ssl", 00:14:10.565 "recv_buf_size": 4096, 00:14:10.565 "send_buf_size": 4096, 00:14:10.565 "enable_recv_pipe": true, 00:14:10.565 "enable_quickack": false, 00:14:10.565 "enable_placement_id": 0, 00:14:10.565 "enable_zerocopy_send_server": true, 00:14:10.565 "enable_zerocopy_send_client": false, 00:14:10.565 "zerocopy_threshold": 0, 00:14:10.565 "tls_version": 0, 00:14:10.565 "enable_ktls": false 00:14:10.565 } 00:14:10.565 }, 00:14:10.565 { 00:14:10.565 "method": "sock_impl_set_options", 00:14:10.565 "params": { 00:14:10.565 "impl_name": "posix", 00:14:10.565 "recv_buf_size": 2097152, 00:14:10.565 "send_buf_size": 2097152, 00:14:10.565 "enable_recv_pipe": true, 00:14:10.565 "enable_quickack": false, 00:14:10.565 "enable_placement_id": 0, 00:14:10.565 "enable_zerocopy_send_server": true, 00:14:10.565 "enable_zerocopy_send_client": false, 00:14:10.565 "zerocopy_threshold": 0, 00:14:10.565 "tls_version": 0, 00:14:10.565 "enable_ktls": false 00:14:10.565 } 00:14:10.565 }, 00:14:10.565 { 00:14:10.565 "method": "sock_impl_set_options", 00:14:10.565 "params": { 00:14:10.565 "impl_name": "uring", 00:14:10.565 "recv_buf_size": 2097152, 00:14:10.565 "send_buf_size": 2097152, 00:14:10.565 "enable_recv_pipe": true, 00:14:10.565 "enable_quickack": false, 00:14:10.565 "enable_placement_id": 0, 00:14:10.565 "enable_zerocopy_send_server": false, 00:14:10.565 "enable_zerocopy_send_client": false, 00:14:10.566 "zerocopy_threshold": 0, 00:14:10.566 "tls_version": 0, 00:14:10.566 "enable_ktls": false 00:14:10.566 } 00:14:10.566 } 00:14:10.566 ] 00:14:10.566 }, 00:14:10.566 { 00:14:10.566 "subsystem": "vmd", 00:14:10.566 "config": [] 00:14:10.566 }, 00:14:10.566 { 00:14:10.566 "subsystem": "accel", 00:14:10.566 "config": [ 00:14:10.566 { 00:14:10.566 "method": "accel_set_options", 00:14:10.566 "params": { 00:14:10.566 "small_cache_size": 128, 00:14:10.566 "large_cache_size": 16, 00:14:10.566 "task_count": 2048, 00:14:10.566 "sequence_count": 
2048, 00:14:10.566 "buf_count": 2048 00:14:10.566 } 00:14:10.566 } 00:14:10.566 ] 00:14:10.566 }, 00:14:10.566 { 00:14:10.566 "subsystem": "bdev", 00:14:10.566 "config": [ 00:14:10.566 { 00:14:10.566 "method": "bdev_set_options", 00:14:10.566 "params": { 00:14:10.566 "bdev_io_pool_size": 65535, 00:14:10.566 "bdev_io_cache_size": 256, 00:14:10.566 "bdev_auto_examine": true, 00:14:10.566 "iobuf_small_cache_size": 128, 00:14:10.566 "iobuf_large_cache_size": 16 00:14:10.566 } 00:14:10.566 }, 00:14:10.566 { 00:14:10.566 "method": "bdev_raid_set_options", 00:14:10.566 "params": { 00:14:10.566 "process_window_size_kb": 1024, 00:14:10.566 "process_max_bandwidth_mb_sec": 0 00:14:10.566 } 00:14:10.566 }, 00:14:10.566 { 00:14:10.566 "method": "bdev_iscsi_set_options", 00:14:10.566 "params": { 00:14:10.566 "timeout_sec": 30 00:14:10.566 } 00:14:10.566 }, 00:14:10.566 { 00:14:10.566 "method": "bdev_nvme_set_options", 00:14:10.566 "params": { 00:14:10.566 "action_on_timeout": "none", 00:14:10.566 "timeout_us": 0, 00:14:10.566 "timeout_admin_us": 0, 00:14:10.566 "keep_alive_timeout_ms": 10000, 00:14:10.566 "arbitration_burst": 0, 00:14:10.566 "low_priority_weight": 0, 00:14:10.566 "medium_priority_weight": 0, 00:14:10.566 "high_priority_weight": 0, 00:14:10.566 "nvme_adminq_poll_period_us": 10000, 00:14:10.566 "nvme_ioq_poll_period_us": 0, 00:14:10.566 "io_queue_requests": 512, 00:14:10.566 "delay_cmd_submit": true, 00:14:10.566 "transport_retry_count": 4, 00:14:10.566 "bdev_retry_count": 3, 00:14:10.566 "transport_ack_timeout": 0, 00:14:10.566 "ctrlr_loss_timeout_sec": 0, 00:14:10.566 "reconnect_delay_sec": 0, 00:14:10.566 "fast_io_fail_timeout_sec": 0, 00:14:10.566 "disable_auto_failback": false, 00:14:10.566 "generate_uuids": false, 00:14:10.566 "transport_tos": 0, 00:14:10.566 "nvme_error_stat": false, 00:14:10.566 "rdma_srq_size": 0, 00:14:10.566 "io_path_stat": false, 00:14:10.566 "allow_accel_sequence": false, 00:14:10.566 "rdma_max_cq_size": 0, 00:14:10.566 "rdma_cm_event_timeout_ms": 0, 00:14:10.566 "dhchap_digests": [ 00:14:10.566 "sha256", 00:14:10.566 "sha384", 00:14:10.566 "sha512" 00:14:10.566 ], 00:14:10.566 "dhchap_dhgroups": [ 00:14:10.566 "null", 00:14:10.566 "ffdhe2048", 00:14:10.566 "ffdhe3072", 00:14:10.566 "ffdhe4096", 00:14:10.566 "ffdhe6144", 00:14:10.566 "ffdhe8192" 00:14:10.566 ] 00:14:10.566 } 00:14:10.566 }, 00:14:10.566 { 00:14:10.566 "method": "bdev_nvme_attach_controller", 00:14:10.566 "params": { 00:14:10.566 "name": "TLSTEST", 00:14:10.566 "trtype": "TCP", 00:14:10.566 "adrfam": "IPv4", 00:14:10.566 "traddr": "10.0.0.3", 00:14:10.566 "trsvcid": "4420", 00:14:10.566 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:10.566 "prchk_reftag": false, 00:14:10.566 "prchk_guard": false, 00:14:10.566 "ctrlr_loss_timeout_sec": 0, 00:14:10.566 "reconnect_delay_sec": 0, 00:14:10.566 "fast_io_fail_timeout_sec": 0, 00:14:10.566 "psk": "key0", 00:14:10.566 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:10.566 "hdgst": false, 00:14:10.566 "ddgst": false, 00:14:10.566 "multipath": "multipath" 00:14:10.566 } 00:14:10.566 }, 00:14:10.566 { 00:14:10.566 "method": "bdev_nvme_set_hotplug", 00:14:10.566 "params": { 00:14:10.566 "period_us": 100000, 00:14:10.566 "enable": false 00:14:10.566 } 00:14:10.566 }, 00:14:10.566 { 00:14:10.566 "method": "bdev_wait_for_examine" 00:14:10.566 } 00:14:10.566 ] 00:14:10.566 }, 00:14:10.566 { 00:14:10.566 "subsystem": "nbd", 00:14:10.566 "config": [] 00:14:10.566 } 00:14:10.566 ] 00:14:10.566 }' 00:14:10.566 [2024-11-15 10:57:57.247642] Starting SPDK v25.01-pre git 
sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:14:10.566 [2024-11-15 10:57:57.248300] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71917 ] 00:14:10.566 [2024-11-15 10:57:57.391265] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:10.825 [2024-11-15 10:57:57.442545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:10.825 [2024-11-15 10:57:57.593037] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:10.825 [2024-11-15 10:57:57.648134] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:11.392 10:57:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:11.392 10:57:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:11.392 10:57:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:11.652 Running I/O for 10 seconds... 00:14:13.534 3968.00 IOPS, 15.50 MiB/s [2024-11-15T10:58:01.331Z] 3983.50 IOPS, 15.56 MiB/s [2024-11-15T10:58:02.709Z] 4017.67 IOPS, 15.69 MiB/s [2024-11-15T10:58:03.645Z] 4032.00 IOPS, 15.75 MiB/s [2024-11-15T10:58:04.586Z] 4044.80 IOPS, 15.80 MiB/s [2024-11-15T10:58:05.536Z] 4068.17 IOPS, 15.89 MiB/s [2024-11-15T10:58:06.473Z] 4096.29 IOPS, 16.00 MiB/s [2024-11-15T10:58:07.411Z] 4128.00 IOPS, 16.12 MiB/s [2024-11-15T10:58:08.348Z] 4148.89 IOPS, 16.21 MiB/s [2024-11-15T10:58:08.607Z] 4169.40 IOPS, 16.29 MiB/s 00:14:21.746 Latency(us) 00:14:21.746 [2024-11-15T10:58:08.607Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:21.746 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:21.746 Verification LBA range: start 0x0 length 0x2000 00:14:21.746 TLSTESTn1 : 10.03 4170.46 16.29 0.00 0.00 30627.44 9830.40 23950.43 00:14:21.746 [2024-11-15T10:58:08.607Z] =================================================================================================================== 00:14:21.746 [2024-11-15T10:58:08.607Z] Total : 4170.46 16.29 0.00 0.00 30627.44 9830.40 23950.43 00:14:21.746 { 00:14:21.746 "results": [ 00:14:21.746 { 00:14:21.746 "job": "TLSTESTn1", 00:14:21.746 "core_mask": "0x4", 00:14:21.746 "workload": "verify", 00:14:21.746 "status": "finished", 00:14:21.746 "verify_range": { 00:14:21.746 "start": 0, 00:14:21.746 "length": 8192 00:14:21.746 }, 00:14:21.746 "queue_depth": 128, 00:14:21.746 "io_size": 4096, 00:14:21.746 "runtime": 10.027429, 00:14:21.746 "iops": 4170.460842953862, 00:14:21.746 "mibps": 16.290862667788524, 00:14:21.746 "io_failed": 0, 00:14:21.746 "io_timeout": 0, 00:14:21.746 "avg_latency_us": 30627.43523852794, 00:14:21.746 "min_latency_us": 9830.4, 00:14:21.746 "max_latency_us": 23950.429090909092 00:14:21.746 } 00:14:21.746 ], 00:14:21.746 "core_count": 1 00:14:21.746 } 00:14:21.746 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:21.746 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 71917 00:14:21.747 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71917 ']' 00:14:21.747 10:58:08 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71917 00:14:21.747 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:21.747 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:21.747 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71917 00:14:21.747 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:14:21.747 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:14:21.747 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71917' 00:14:21.747 killing process with pid 71917 00:14:21.747 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71917 00:14:21.747 Received shutdown signal, test time was about 10.000000 seconds 00:14:21.747 00:14:21.747 Latency(us) 00:14:21.747 [2024-11-15T10:58:08.608Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:21.747 [2024-11-15T10:58:08.608Z] =================================================================================================================== 00:14:21.747 [2024-11-15T10:58:08.608Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:21.747 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71917 00:14:22.006 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 71885 00:14:22.006 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71885 ']' 00:14:22.006 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71885 00:14:22.006 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:22.006 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:22.006 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71885 00:14:22.006 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:22.006 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:22.006 killing process with pid 71885 00:14:22.006 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71885' 00:14:22.006 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71885 00:14:22.006 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71885 00:14:22.265 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:14:22.265 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:22.265 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:22.265 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:22.265 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:22.265 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72050 00:14:22.265 
10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72050 00:14:22.265 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72050 ']' 00:14:22.265 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:22.265 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:22.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:22.265 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:22.265 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:22.265 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:22.265 [2024-11-15 10:58:08.932023] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:14:22.265 [2024-11-15 10:58:08.932141] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:22.265 [2024-11-15 10:58:09.077117] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:22.524 [2024-11-15 10:58:09.130719] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:22.524 [2024-11-15 10:58:09.130796] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:22.524 [2024-11-15 10:58:09.130818] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:22.524 [2024-11-15 10:58:09.130829] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:22.524 [2024-11-15 10:58:09.130843] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
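A quick cross-check on the TLSTESTn1 numbers reported earlier in this log: bdevperf derives the MiB/s column directly from the JSON "iops" field and the 4 KiB I/O size (-o 4k), so the two can be recomputed offline. A minimal sketch, assuming only a shell with awk and using the constants copied from the results JSON above:

```bash
# MiB/s = IOPS * io_size_bytes / 2^20, with 4096-byte I/Os (-o 4k)
awk 'BEGIN { printf "%.2f MiB/s\n", 4170.460842953862 * 4096 / (1024 * 1024) }'
# prints 16.29 MiB/s, matching the "mibps" field for the 10-second TLSTESTn1 run
```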
00:14:22.524 [2024-11-15 10:58:09.131301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:22.524 [2024-11-15 10:58:09.188005] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:22.524 10:58:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:22.524 10:58:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:22.524 10:58:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:22.524 10:58:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:22.524 10:58:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:22.524 10:58:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:22.524 10:58:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.cv7zr8cbIf 00:14:22.524 10:58:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.cv7zr8cbIf 00:14:22.524 10:58:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:22.783 [2024-11-15 10:58:09.524453] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:22.783 10:58:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:23.041 10:58:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:14:23.299 [2024-11-15 10:58:10.084567] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:23.299 [2024-11-15 10:58:10.084825] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:23.299 10:58:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:23.558 malloc0 00:14:23.558 10:58:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:23.817 10:58:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.cv7zr8cbIf 00:14:24.076 10:58:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:14:24.335 10:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=72098 00:14:24.335 10:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:14:24.335 10:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:24.335 10:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 72098 /var/tmp/bdevperf.sock 00:14:24.335 10:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72098 ']' 00:14:24.335 
10:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:24.335 10:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:24.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:24.335 10:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:24.335 10:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:24.335 10:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:24.335 [2024-11-15 10:58:11.061164] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:14:24.335 [2024-11-15 10:58:11.061259] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72098 ] 00:14:24.593 [2024-11-15 10:58:11.207831] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:24.594 [2024-11-15 10:58:11.267328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:24.594 [2024-11-15 10:58:11.324891] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:25.530 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:25.530 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:25.530 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.cv7zr8cbIf 00:14:25.530 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:14:25.787 [2024-11-15 10:58:12.526112] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:25.787 nvme0n1 00:14:25.787 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:26.045 Running I/O for 1 seconds... 
00:14:26.983 4460.00 IOPS, 17.42 MiB/s 00:14:26.983 Latency(us) 00:14:26.983 [2024-11-15T10:58:13.844Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:26.983 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:26.983 Verification LBA range: start 0x0 length 0x2000 00:14:26.983 nvme0n1 : 1.02 4513.18 17.63 0.00 0.00 28099.37 6166.34 21924.77 00:14:26.983 [2024-11-15T10:58:13.844Z] =================================================================================================================== 00:14:26.983 [2024-11-15T10:58:13.844Z] Total : 4513.18 17.63 0.00 0.00 28099.37 6166.34 21924.77 00:14:26.983 { 00:14:26.983 "results": [ 00:14:26.983 { 00:14:26.983 "job": "nvme0n1", 00:14:26.983 "core_mask": "0x2", 00:14:26.984 "workload": "verify", 00:14:26.984 "status": "finished", 00:14:26.984 "verify_range": { 00:14:26.984 "start": 0, 00:14:26.984 "length": 8192 00:14:26.984 }, 00:14:26.984 "queue_depth": 128, 00:14:26.984 "io_size": 4096, 00:14:26.984 "runtime": 1.016578, 00:14:26.984 "iops": 4513.180493774211, 00:14:26.984 "mibps": 17.62961130380551, 00:14:26.984 "io_failed": 0, 00:14:26.984 "io_timeout": 0, 00:14:26.984 "avg_latency_us": 28099.368356978677, 00:14:26.984 "min_latency_us": 6166.341818181818, 00:14:26.984 "max_latency_us": 21924.77090909091 00:14:26.984 } 00:14:26.984 ], 00:14:26.984 "core_count": 1 00:14:26.984 } 00:14:26.984 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 72098 00:14:26.984 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72098 ']' 00:14:26.984 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72098 00:14:26.984 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:26.984 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:26.984 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72098 00:14:26.984 killing process with pid 72098 00:14:26.984 Received shutdown signal, test time was about 1.000000 seconds 00:14:26.984 00:14:26.984 Latency(us) 00:14:26.984 [2024-11-15T10:58:13.845Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:26.984 [2024-11-15T10:58:13.845Z] =================================================================================================================== 00:14:26.984 [2024-11-15T10:58:13.845Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:26.984 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:26.984 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:26.984 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72098' 00:14:26.984 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72098 00:14:26.984 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72098 00:14:27.243 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 72050 00:14:27.243 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72050 ']' 00:14:27.243 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72050 00:14:27.243 10:58:13 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:27.243 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:27.243 10:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72050 00:14:27.243 10:58:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:27.243 killing process with pid 72050 00:14:27.243 10:58:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:27.243 10:58:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72050' 00:14:27.243 10:58:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72050 00:14:27.243 10:58:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72050 00:14:27.502 10:58:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:14:27.502 10:58:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:27.502 10:58:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:27.502 10:58:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:27.502 10:58:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72155 00:14:27.502 10:58:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:27.502 10:58:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72155 00:14:27.502 10:58:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72155 ']' 00:14:27.502 10:58:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:27.502 10:58:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:27.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:27.502 10:58:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:27.502 10:58:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:27.502 10:58:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:27.502 [2024-11-15 10:58:14.332640] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:14:27.502 [2024-11-15 10:58:14.332737] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:27.761 [2024-11-15 10:58:14.479539] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:27.761 [2024-11-15 10:58:14.523094] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:27.761 [2024-11-15 10:58:14.523158] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:27.761 [2024-11-15 10:58:14.523167] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:27.761 [2024-11-15 10:58:14.523179] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:27.761 [2024-11-15 10:58:14.523185] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:27.761 [2024-11-15 10:58:14.523572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:27.761 [2024-11-15 10:58:14.591004] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:28.020 10:58:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:28.020 10:58:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:28.020 10:58:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:28.020 10:58:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:28.020 10:58:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:28.020 10:58:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:28.020 10:58:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:14:28.020 10:58:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.020 10:58:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:28.020 [2024-11-15 10:58:14.709323] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:28.020 malloc0 00:14:28.020 [2024-11-15 10:58:14.742567] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:28.020 [2024-11-15 10:58:14.742815] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:28.020 10:58:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.020 10:58:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=72174 00:14:28.020 10:58:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:14:28.020 10:58:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 72174 /var/tmp/bdevperf.sock 00:14:28.020 10:58:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72174 ']' 00:14:28.020 10:58:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:28.020 10:58:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:28.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:28.020 10:58:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
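For reference, the TLS/PSK plumbing exercised in these bdevperf runs reduces to a short RPC sequence on each side. The sketch below only collects the commands already traced above into one place (target first, then the bdevperf initiator); the RPC and PSK shell variables are shorthand introduced here, everything else is taken verbatim from this log:

```bash
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
PSK=/tmp/tmp.cv7zr8cbIf   # pre-shared TLS key file used throughout this test

# Target side: TCP transport, subsystem with a malloc namespace, TLS-enabled listener,
# and a host entry bound to the PSK.
$RPC nvmf_create_transport -t tcp -o
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
$RPC bdev_malloc_create 32 4096 -b malloc0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$RPC keyring_file_add_key key0 $PSK
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

# Initiator side (bdevperf started with -z -r /var/tmp/bdevperf.sock): register the same key,
# attach a controller over TLS, then drive the verify workload via the bdevperf RPC helper.
$RPC -s /var/tmp/bdevperf.sock keyring_file_add_key key0 $PSK
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 \
    -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
```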
00:14:28.020 10:58:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:28.020 10:58:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:28.020 [2024-11-15 10:58:14.818674] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:14:28.021 [2024-11-15 10:58:14.818779] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72174 ] 00:14:28.279 [2024-11-15 10:58:14.957873] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:28.279 [2024-11-15 10:58:15.011856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:28.279 [2024-11-15 10:58:15.065583] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:28.279 10:58:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:28.279 10:58:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:28.279 10:58:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.cv7zr8cbIf 00:14:28.538 10:58:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:14:28.797 [2024-11-15 10:58:15.567657] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:28.797 nvme0n1 00:14:28.797 10:58:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:29.056 Running I/O for 1 seconds... 
00:14:29.993 4498.00 IOPS, 17.57 MiB/s 00:14:29.993 Latency(us) 00:14:29.993 [2024-11-15T10:58:16.854Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:29.993 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:29.993 Verification LBA range: start 0x0 length 0x2000 00:14:29.993 nvme0n1 : 1.01 4560.85 17.82 0.00 0.00 27856.15 4885.41 22163.08 00:14:29.993 [2024-11-15T10:58:16.854Z] =================================================================================================================== 00:14:29.993 [2024-11-15T10:58:16.854Z] Total : 4560.85 17.82 0.00 0.00 27856.15 4885.41 22163.08 00:14:29.993 { 00:14:29.993 "results": [ 00:14:29.993 { 00:14:29.993 "job": "nvme0n1", 00:14:29.993 "core_mask": "0x2", 00:14:29.993 "workload": "verify", 00:14:29.993 "status": "finished", 00:14:29.993 "verify_range": { 00:14:29.993 "start": 0, 00:14:29.993 "length": 8192 00:14:29.993 }, 00:14:29.993 "queue_depth": 128, 00:14:29.993 "io_size": 4096, 00:14:29.993 "runtime": 1.014284, 00:14:29.993 "iops": 4560.852778906105, 00:14:29.993 "mibps": 17.815831167601974, 00:14:29.993 "io_failed": 0, 00:14:29.993 "io_timeout": 0, 00:14:29.993 "avg_latency_us": 27856.15497543529, 00:14:29.993 "min_latency_us": 4885.410909090909, 00:14:29.993 "max_latency_us": 22163.083636363637 00:14:29.993 } 00:14:29.993 ], 00:14:29.993 "core_count": 1 00:14:29.993 } 00:14:29.993 10:58:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:14:29.993 10:58:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.993 10:58:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:30.252 10:58:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.252 10:58:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:14:30.252 "subsystems": [ 00:14:30.252 { 00:14:30.252 "subsystem": "keyring", 00:14:30.252 "config": [ 00:14:30.252 { 00:14:30.252 "method": "keyring_file_add_key", 00:14:30.252 "params": { 00:14:30.252 "name": "key0", 00:14:30.252 "path": "/tmp/tmp.cv7zr8cbIf" 00:14:30.252 } 00:14:30.252 } 00:14:30.252 ] 00:14:30.252 }, 00:14:30.252 { 00:14:30.252 "subsystem": "iobuf", 00:14:30.252 "config": [ 00:14:30.252 { 00:14:30.252 "method": "iobuf_set_options", 00:14:30.252 "params": { 00:14:30.252 "small_pool_count": 8192, 00:14:30.252 "large_pool_count": 1024, 00:14:30.252 "small_bufsize": 8192, 00:14:30.252 "large_bufsize": 135168, 00:14:30.252 "enable_numa": false 00:14:30.252 } 00:14:30.252 } 00:14:30.252 ] 00:14:30.252 }, 00:14:30.252 { 00:14:30.252 "subsystem": "sock", 00:14:30.252 "config": [ 00:14:30.252 { 00:14:30.252 "method": "sock_set_default_impl", 00:14:30.252 "params": { 00:14:30.252 "impl_name": "uring" 00:14:30.252 } 00:14:30.252 }, 00:14:30.252 { 00:14:30.252 "method": "sock_impl_set_options", 00:14:30.252 "params": { 00:14:30.252 "impl_name": "ssl", 00:14:30.252 "recv_buf_size": 4096, 00:14:30.252 "send_buf_size": 4096, 00:14:30.252 "enable_recv_pipe": true, 00:14:30.252 "enable_quickack": false, 00:14:30.252 "enable_placement_id": 0, 00:14:30.252 "enable_zerocopy_send_server": true, 00:14:30.252 "enable_zerocopy_send_client": false, 00:14:30.252 "zerocopy_threshold": 0, 00:14:30.252 "tls_version": 0, 00:14:30.252 "enable_ktls": false 00:14:30.252 } 00:14:30.252 }, 00:14:30.252 { 00:14:30.252 "method": "sock_impl_set_options", 00:14:30.252 "params": { 00:14:30.252 "impl_name": "posix", 
00:14:30.252 "recv_buf_size": 2097152, 00:14:30.252 "send_buf_size": 2097152, 00:14:30.252 "enable_recv_pipe": true, 00:14:30.252 "enable_quickack": false, 00:14:30.252 "enable_placement_id": 0, 00:14:30.252 "enable_zerocopy_send_server": true, 00:14:30.252 "enable_zerocopy_send_client": false, 00:14:30.252 "zerocopy_threshold": 0, 00:14:30.252 "tls_version": 0, 00:14:30.252 "enable_ktls": false 00:14:30.252 } 00:14:30.252 }, 00:14:30.252 { 00:14:30.252 "method": "sock_impl_set_options", 00:14:30.252 "params": { 00:14:30.252 "impl_name": "uring", 00:14:30.252 "recv_buf_size": 2097152, 00:14:30.252 "send_buf_size": 2097152, 00:14:30.252 "enable_recv_pipe": true, 00:14:30.252 "enable_quickack": false, 00:14:30.252 "enable_placement_id": 0, 00:14:30.252 "enable_zerocopy_send_server": false, 00:14:30.252 "enable_zerocopy_send_client": false, 00:14:30.252 "zerocopy_threshold": 0, 00:14:30.252 "tls_version": 0, 00:14:30.252 "enable_ktls": false 00:14:30.252 } 00:14:30.252 } 00:14:30.252 ] 00:14:30.252 }, 00:14:30.252 { 00:14:30.252 "subsystem": "vmd", 00:14:30.252 "config": [] 00:14:30.252 }, 00:14:30.252 { 00:14:30.252 "subsystem": "accel", 00:14:30.252 "config": [ 00:14:30.252 { 00:14:30.252 "method": "accel_set_options", 00:14:30.252 "params": { 00:14:30.252 "small_cache_size": 128, 00:14:30.252 "large_cache_size": 16, 00:14:30.252 "task_count": 2048, 00:14:30.252 "sequence_count": 2048, 00:14:30.252 "buf_count": 2048 00:14:30.252 } 00:14:30.252 } 00:14:30.252 ] 00:14:30.252 }, 00:14:30.252 { 00:14:30.252 "subsystem": "bdev", 00:14:30.252 "config": [ 00:14:30.252 { 00:14:30.252 "method": "bdev_set_options", 00:14:30.252 "params": { 00:14:30.252 "bdev_io_pool_size": 65535, 00:14:30.252 "bdev_io_cache_size": 256, 00:14:30.252 "bdev_auto_examine": true, 00:14:30.252 "iobuf_small_cache_size": 128, 00:14:30.252 "iobuf_large_cache_size": 16 00:14:30.252 } 00:14:30.252 }, 00:14:30.252 { 00:14:30.252 "method": "bdev_raid_set_options", 00:14:30.252 "params": { 00:14:30.252 "process_window_size_kb": 1024, 00:14:30.252 "process_max_bandwidth_mb_sec": 0 00:14:30.252 } 00:14:30.252 }, 00:14:30.252 { 00:14:30.252 "method": "bdev_iscsi_set_options", 00:14:30.252 "params": { 00:14:30.252 "timeout_sec": 30 00:14:30.252 } 00:14:30.252 }, 00:14:30.252 { 00:14:30.252 "method": "bdev_nvme_set_options", 00:14:30.252 "params": { 00:14:30.252 "action_on_timeout": "none", 00:14:30.252 "timeout_us": 0, 00:14:30.252 "timeout_admin_us": 0, 00:14:30.252 "keep_alive_timeout_ms": 10000, 00:14:30.252 "arbitration_burst": 0, 00:14:30.252 "low_priority_weight": 0, 00:14:30.252 "medium_priority_weight": 0, 00:14:30.252 "high_priority_weight": 0, 00:14:30.252 "nvme_adminq_poll_period_us": 10000, 00:14:30.252 "nvme_ioq_poll_period_us": 0, 00:14:30.252 "io_queue_requests": 0, 00:14:30.252 "delay_cmd_submit": true, 00:14:30.252 "transport_retry_count": 4, 00:14:30.252 "bdev_retry_count": 3, 00:14:30.252 "transport_ack_timeout": 0, 00:14:30.252 "ctrlr_loss_timeout_sec": 0, 00:14:30.252 "reconnect_delay_sec": 0, 00:14:30.252 "fast_io_fail_timeout_sec": 0, 00:14:30.252 "disable_auto_failback": false, 00:14:30.252 "generate_uuids": false, 00:14:30.252 "transport_tos": 0, 00:14:30.252 "nvme_error_stat": false, 00:14:30.252 "rdma_srq_size": 0, 00:14:30.252 "io_path_stat": false, 00:14:30.252 "allow_accel_sequence": false, 00:14:30.252 "rdma_max_cq_size": 0, 00:14:30.252 "rdma_cm_event_timeout_ms": 0, 00:14:30.252 "dhchap_digests": [ 00:14:30.252 "sha256", 00:14:30.252 "sha384", 00:14:30.252 "sha512" 00:14:30.252 ], 00:14:30.252 
"dhchap_dhgroups": [ 00:14:30.252 "null", 00:14:30.252 "ffdhe2048", 00:14:30.252 "ffdhe3072", 00:14:30.252 "ffdhe4096", 00:14:30.252 "ffdhe6144", 00:14:30.252 "ffdhe8192" 00:14:30.252 ] 00:14:30.252 } 00:14:30.252 }, 00:14:30.252 { 00:14:30.252 "method": "bdev_nvme_set_hotplug", 00:14:30.252 "params": { 00:14:30.252 "period_us": 100000, 00:14:30.252 "enable": false 00:14:30.252 } 00:14:30.252 }, 00:14:30.252 { 00:14:30.252 "method": "bdev_malloc_create", 00:14:30.252 "params": { 00:14:30.252 "name": "malloc0", 00:14:30.252 "num_blocks": 8192, 00:14:30.252 "block_size": 4096, 00:14:30.252 "physical_block_size": 4096, 00:14:30.252 "uuid": "2e7ef916-d165-4e36-8cac-7f0b8b9eda31", 00:14:30.252 "optimal_io_boundary": 0, 00:14:30.252 "md_size": 0, 00:14:30.252 "dif_type": 0, 00:14:30.253 "dif_is_head_of_md": false, 00:14:30.253 "dif_pi_format": 0 00:14:30.253 } 00:14:30.253 }, 00:14:30.253 { 00:14:30.253 "method": "bdev_wait_for_examine" 00:14:30.253 } 00:14:30.253 ] 00:14:30.253 }, 00:14:30.253 { 00:14:30.253 "subsystem": "nbd", 00:14:30.253 "config": [] 00:14:30.253 }, 00:14:30.253 { 00:14:30.253 "subsystem": "scheduler", 00:14:30.253 "config": [ 00:14:30.253 { 00:14:30.253 "method": "framework_set_scheduler", 00:14:30.253 "params": { 00:14:30.253 "name": "static" 00:14:30.253 } 00:14:30.253 } 00:14:30.253 ] 00:14:30.253 }, 00:14:30.253 { 00:14:30.253 "subsystem": "nvmf", 00:14:30.253 "config": [ 00:14:30.253 { 00:14:30.253 "method": "nvmf_set_config", 00:14:30.253 "params": { 00:14:30.253 "discovery_filter": "match_any", 00:14:30.253 "admin_cmd_passthru": { 00:14:30.253 "identify_ctrlr": false 00:14:30.253 }, 00:14:30.253 "dhchap_digests": [ 00:14:30.253 "sha256", 00:14:30.253 "sha384", 00:14:30.253 "sha512" 00:14:30.253 ], 00:14:30.253 "dhchap_dhgroups": [ 00:14:30.253 "null", 00:14:30.253 "ffdhe2048", 00:14:30.253 "ffdhe3072", 00:14:30.253 "ffdhe4096", 00:14:30.253 "ffdhe6144", 00:14:30.253 "ffdhe8192" 00:14:30.253 ] 00:14:30.253 } 00:14:30.253 }, 00:14:30.253 { 00:14:30.253 "method": "nvmf_set_max_subsystems", 00:14:30.253 "params": { 00:14:30.253 "max_subsystems": 1024 00:14:30.253 } 00:14:30.253 }, 00:14:30.253 { 00:14:30.253 "method": "nvmf_set_crdt", 00:14:30.253 "params": { 00:14:30.253 "crdt1": 0, 00:14:30.253 "crdt2": 0, 00:14:30.253 "crdt3": 0 00:14:30.253 } 00:14:30.253 }, 00:14:30.253 { 00:14:30.253 "method": "nvmf_create_transport", 00:14:30.253 "params": { 00:14:30.253 "trtype": "TCP", 00:14:30.253 "max_queue_depth": 128, 00:14:30.253 "max_io_qpairs_per_ctrlr": 127, 00:14:30.253 "in_capsule_data_size": 4096, 00:14:30.253 "max_io_size": 131072, 00:14:30.253 "io_unit_size": 131072, 00:14:30.253 "max_aq_depth": 128, 00:14:30.253 "num_shared_buffers": 511, 00:14:30.253 "buf_cache_size": 4294967295, 00:14:30.253 "dif_insert_or_strip": false, 00:14:30.253 "zcopy": false, 00:14:30.253 "c2h_success": false, 00:14:30.253 "sock_priority": 0, 00:14:30.253 "abort_timeout_sec": 1, 00:14:30.253 "ack_timeout": 0, 00:14:30.253 "data_wr_pool_size": 0 00:14:30.253 } 00:14:30.253 }, 00:14:30.253 { 00:14:30.253 "method": "nvmf_create_subsystem", 00:14:30.253 "params": { 00:14:30.253 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:30.253 "allow_any_host": false, 00:14:30.253 "serial_number": "00000000000000000000", 00:14:30.253 "model_number": "SPDK bdev Controller", 00:14:30.253 "max_namespaces": 32, 00:14:30.253 "min_cntlid": 1, 00:14:30.253 "max_cntlid": 65519, 00:14:30.253 "ana_reporting": false 00:14:30.253 } 00:14:30.253 }, 00:14:30.253 { 00:14:30.253 "method": "nvmf_subsystem_add_host", 
00:14:30.253 "params": { 00:14:30.253 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:30.253 "host": "nqn.2016-06.io.spdk:host1", 00:14:30.253 "psk": "key0" 00:14:30.253 } 00:14:30.253 }, 00:14:30.253 { 00:14:30.253 "method": "nvmf_subsystem_add_ns", 00:14:30.253 "params": { 00:14:30.253 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:30.253 "namespace": { 00:14:30.253 "nsid": 1, 00:14:30.253 "bdev_name": "malloc0", 00:14:30.253 "nguid": "2E7EF916D1654E368CAC7F0B8B9EDA31", 00:14:30.253 "uuid": "2e7ef916-d165-4e36-8cac-7f0b8b9eda31", 00:14:30.253 "no_auto_visible": false 00:14:30.253 } 00:14:30.253 } 00:14:30.253 }, 00:14:30.253 { 00:14:30.253 "method": "nvmf_subsystem_add_listener", 00:14:30.253 "params": { 00:14:30.253 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:30.253 "listen_address": { 00:14:30.253 "trtype": "TCP", 00:14:30.253 "adrfam": "IPv4", 00:14:30.253 "traddr": "10.0.0.3", 00:14:30.253 "trsvcid": "4420" 00:14:30.253 }, 00:14:30.253 "secure_channel": false, 00:14:30.253 "sock_impl": "ssl" 00:14:30.253 } 00:14:30.253 } 00:14:30.253 ] 00:14:30.253 } 00:14:30.253 ] 00:14:30.253 }' 00:14:30.253 10:58:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:14:30.513 10:58:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:14:30.513 "subsystems": [ 00:14:30.513 { 00:14:30.513 "subsystem": "keyring", 00:14:30.513 "config": [ 00:14:30.513 { 00:14:30.513 "method": "keyring_file_add_key", 00:14:30.513 "params": { 00:14:30.513 "name": "key0", 00:14:30.513 "path": "/tmp/tmp.cv7zr8cbIf" 00:14:30.513 } 00:14:30.513 } 00:14:30.513 ] 00:14:30.513 }, 00:14:30.513 { 00:14:30.513 "subsystem": "iobuf", 00:14:30.513 "config": [ 00:14:30.513 { 00:14:30.513 "method": "iobuf_set_options", 00:14:30.513 "params": { 00:14:30.513 "small_pool_count": 8192, 00:14:30.513 "large_pool_count": 1024, 00:14:30.513 "small_bufsize": 8192, 00:14:30.513 "large_bufsize": 135168, 00:14:30.513 "enable_numa": false 00:14:30.513 } 00:14:30.513 } 00:14:30.513 ] 00:14:30.513 }, 00:14:30.513 { 00:14:30.513 "subsystem": "sock", 00:14:30.513 "config": [ 00:14:30.513 { 00:14:30.513 "method": "sock_set_default_impl", 00:14:30.513 "params": { 00:14:30.513 "impl_name": "uring" 00:14:30.513 } 00:14:30.513 }, 00:14:30.513 { 00:14:30.513 "method": "sock_impl_set_options", 00:14:30.513 "params": { 00:14:30.513 "impl_name": "ssl", 00:14:30.513 "recv_buf_size": 4096, 00:14:30.513 "send_buf_size": 4096, 00:14:30.513 "enable_recv_pipe": true, 00:14:30.513 "enable_quickack": false, 00:14:30.513 "enable_placement_id": 0, 00:14:30.513 "enable_zerocopy_send_server": true, 00:14:30.513 "enable_zerocopy_send_client": false, 00:14:30.513 "zerocopy_threshold": 0, 00:14:30.513 "tls_version": 0, 00:14:30.513 "enable_ktls": false 00:14:30.513 } 00:14:30.513 }, 00:14:30.513 { 00:14:30.513 "method": "sock_impl_set_options", 00:14:30.513 "params": { 00:14:30.513 "impl_name": "posix", 00:14:30.513 "recv_buf_size": 2097152, 00:14:30.513 "send_buf_size": 2097152, 00:14:30.513 "enable_recv_pipe": true, 00:14:30.513 "enable_quickack": false, 00:14:30.513 "enable_placement_id": 0, 00:14:30.513 "enable_zerocopy_send_server": true, 00:14:30.514 "enable_zerocopy_send_client": false, 00:14:30.514 "zerocopy_threshold": 0, 00:14:30.514 "tls_version": 0, 00:14:30.514 "enable_ktls": false 00:14:30.514 } 00:14:30.514 }, 00:14:30.514 { 00:14:30.514 "method": "sock_impl_set_options", 00:14:30.514 "params": { 00:14:30.514 "impl_name": "uring", 00:14:30.514 
"recv_buf_size": 2097152, 00:14:30.514 "send_buf_size": 2097152, 00:14:30.514 "enable_recv_pipe": true, 00:14:30.514 "enable_quickack": false, 00:14:30.514 "enable_placement_id": 0, 00:14:30.514 "enable_zerocopy_send_server": false, 00:14:30.514 "enable_zerocopy_send_client": false, 00:14:30.514 "zerocopy_threshold": 0, 00:14:30.514 "tls_version": 0, 00:14:30.514 "enable_ktls": false 00:14:30.514 } 00:14:30.514 } 00:14:30.514 ] 00:14:30.514 }, 00:14:30.514 { 00:14:30.514 "subsystem": "vmd", 00:14:30.514 "config": [] 00:14:30.514 }, 00:14:30.514 { 00:14:30.514 "subsystem": "accel", 00:14:30.514 "config": [ 00:14:30.514 { 00:14:30.514 "method": "accel_set_options", 00:14:30.514 "params": { 00:14:30.514 "small_cache_size": 128, 00:14:30.514 "large_cache_size": 16, 00:14:30.514 "task_count": 2048, 00:14:30.514 "sequence_count": 2048, 00:14:30.514 "buf_count": 2048 00:14:30.514 } 00:14:30.514 } 00:14:30.514 ] 00:14:30.514 }, 00:14:30.514 { 00:14:30.514 "subsystem": "bdev", 00:14:30.514 "config": [ 00:14:30.514 { 00:14:30.514 "method": "bdev_set_options", 00:14:30.514 "params": { 00:14:30.514 "bdev_io_pool_size": 65535, 00:14:30.514 "bdev_io_cache_size": 256, 00:14:30.514 "bdev_auto_examine": true, 00:14:30.514 "iobuf_small_cache_size": 128, 00:14:30.514 "iobuf_large_cache_size": 16 00:14:30.514 } 00:14:30.514 }, 00:14:30.514 { 00:14:30.514 "method": "bdev_raid_set_options", 00:14:30.514 "params": { 00:14:30.514 "process_window_size_kb": 1024, 00:14:30.514 "process_max_bandwidth_mb_sec": 0 00:14:30.514 } 00:14:30.514 }, 00:14:30.514 { 00:14:30.514 "method": "bdev_iscsi_set_options", 00:14:30.514 "params": { 00:14:30.514 "timeout_sec": 30 00:14:30.514 } 00:14:30.514 }, 00:14:30.514 { 00:14:30.514 "method": "bdev_nvme_set_options", 00:14:30.514 "params": { 00:14:30.514 "action_on_timeout": "none", 00:14:30.514 "timeout_us": 0, 00:14:30.514 "timeout_admin_us": 0, 00:14:30.514 "keep_alive_timeout_ms": 10000, 00:14:30.514 "arbitration_burst": 0, 00:14:30.514 "low_priority_weight": 0, 00:14:30.514 "medium_priority_weight": 0, 00:14:30.514 "high_priority_weight": 0, 00:14:30.514 "nvme_adminq_poll_period_us": 10000, 00:14:30.514 "nvme_ioq_poll_period_us": 0, 00:14:30.514 "io_queue_requests": 512, 00:14:30.514 "delay_cmd_submit": true, 00:14:30.514 "transport_retry_count": 4, 00:14:30.514 "bdev_retry_count": 3, 00:14:30.514 "transport_ack_timeout": 0, 00:14:30.514 "ctrlr_loss_timeout_sec": 0, 00:14:30.514 "reconnect_delay_sec": 0, 00:14:30.514 "fast_io_fail_timeout_sec": 0, 00:14:30.514 "disable_auto_failback": false, 00:14:30.514 "generate_uuids": false, 00:14:30.514 "transport_tos": 0, 00:14:30.514 "nvme_error_stat": false, 00:14:30.514 "rdma_srq_size": 0, 00:14:30.514 "io_path_stat": false, 00:14:30.514 "allow_accel_sequence": false, 00:14:30.514 "rdma_max_cq_size": 0, 00:14:30.514 "rdma_cm_event_timeout_ms": 0, 00:14:30.514 "dhchap_digests": [ 00:14:30.514 "sha256", 00:14:30.514 "sha384", 00:14:30.514 "sha512" 00:14:30.514 ], 00:14:30.514 "dhchap_dhgroups": [ 00:14:30.514 "null", 00:14:30.514 "ffdhe2048", 00:14:30.514 "ffdhe3072", 00:14:30.514 "ffdhe4096", 00:14:30.514 "ffdhe6144", 00:14:30.514 "ffdhe8192" 00:14:30.514 ] 00:14:30.514 } 00:14:30.514 }, 00:14:30.514 { 00:14:30.514 "method": "bdev_nvme_attach_controller", 00:14:30.514 "params": { 00:14:30.514 "name": "nvme0", 00:14:30.514 "trtype": "TCP", 00:14:30.514 "adrfam": "IPv4", 00:14:30.514 "traddr": "10.0.0.3", 00:14:30.514 "trsvcid": "4420", 00:14:30.514 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:30.514 "prchk_reftag": false, 00:14:30.514 
"prchk_guard": false, 00:14:30.514 "ctrlr_loss_timeout_sec": 0, 00:14:30.514 "reconnect_delay_sec": 0, 00:14:30.514 "fast_io_fail_timeout_sec": 0, 00:14:30.514 "psk": "key0", 00:14:30.514 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:30.514 "hdgst": false, 00:14:30.514 "ddgst": false, 00:14:30.514 "multipath": "multipath" 00:14:30.514 } 00:14:30.514 }, 00:14:30.514 { 00:14:30.514 "method": "bdev_nvme_set_hotplug", 00:14:30.514 "params": { 00:14:30.514 "period_us": 100000, 00:14:30.514 "enable": false 00:14:30.514 } 00:14:30.514 }, 00:14:30.514 { 00:14:30.514 "method": "bdev_enable_histogram", 00:14:30.514 "params": { 00:14:30.514 "name": "nvme0n1", 00:14:30.514 "enable": true 00:14:30.514 } 00:14:30.514 }, 00:14:30.514 { 00:14:30.514 "method": "bdev_wait_for_examine" 00:14:30.514 } 00:14:30.514 ] 00:14:30.514 }, 00:14:30.514 { 00:14:30.514 "subsystem": "nbd", 00:14:30.514 "config": [] 00:14:30.514 } 00:14:30.514 ] 00:14:30.514 }' 00:14:30.514 10:58:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 72174 00:14:30.514 10:58:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72174 ']' 00:14:30.514 10:58:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72174 00:14:30.514 10:58:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:30.514 10:58:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:30.514 10:58:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72174 00:14:30.514 10:58:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:30.514 10:58:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:30.514 killing process with pid 72174 00:14:30.514 10:58:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72174' 00:14:30.514 Received shutdown signal, test time was about 1.000000 seconds 00:14:30.514 00:14:30.514 Latency(us) 00:14:30.514 [2024-11-15T10:58:17.375Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:30.514 [2024-11-15T10:58:17.375Z] =================================================================================================================== 00:14:30.514 [2024-11-15T10:58:17.375Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:30.514 10:58:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72174 00:14:30.514 10:58:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72174 00:14:30.773 10:58:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 72155 00:14:30.773 10:58:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72155 ']' 00:14:30.773 10:58:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72155 00:14:30.773 10:58:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:30.773 10:58:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:30.773 10:58:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72155 00:14:30.773 10:58:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:30.773 10:58:17 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:30.773 killing process with pid 72155 00:14:30.773 10:58:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72155' 00:14:30.773 10:58:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72155 00:14:30.773 10:58:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72155 00:14:31.032 10:58:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:14:31.032 10:58:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:31.032 10:58:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:31.032 10:58:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:14:31.032 "subsystems": [ 00:14:31.032 { 00:14:31.032 "subsystem": "keyring", 00:14:31.032 "config": [ 00:14:31.032 { 00:14:31.032 "method": "keyring_file_add_key", 00:14:31.032 "params": { 00:14:31.032 "name": "key0", 00:14:31.032 "path": "/tmp/tmp.cv7zr8cbIf" 00:14:31.032 } 00:14:31.032 } 00:14:31.032 ] 00:14:31.032 }, 00:14:31.032 { 00:14:31.032 "subsystem": "iobuf", 00:14:31.032 "config": [ 00:14:31.032 { 00:14:31.032 "method": "iobuf_set_options", 00:14:31.032 "params": { 00:14:31.032 "small_pool_count": 8192, 00:14:31.032 "large_pool_count": 1024, 00:14:31.032 "small_bufsize": 8192, 00:14:31.032 "large_bufsize": 135168, 00:14:31.032 "enable_numa": false 00:14:31.032 } 00:14:31.032 } 00:14:31.032 ] 00:14:31.032 }, 00:14:31.032 { 00:14:31.032 "subsystem": "sock", 00:14:31.032 "config": [ 00:14:31.032 { 00:14:31.032 "method": "sock_set_default_impl", 00:14:31.032 "params": { 00:14:31.032 "impl_name": "uring" 00:14:31.032 } 00:14:31.032 }, 00:14:31.032 { 00:14:31.032 "method": "sock_impl_set_options", 00:14:31.032 "params": { 00:14:31.032 "impl_name": "ssl", 00:14:31.032 "recv_buf_size": 4096, 00:14:31.032 "send_buf_size": 4096, 00:14:31.032 "enable_recv_pipe": true, 00:14:31.032 "enable_quickack": false, 00:14:31.032 "enable_placement_id": 0, 00:14:31.032 "enable_zerocopy_send_server": true, 00:14:31.032 "enable_zerocopy_send_client": false, 00:14:31.032 "zerocopy_threshold": 0, 00:14:31.032 "tls_version": 0, 00:14:31.032 "enable_ktls": false 00:14:31.032 } 00:14:31.032 }, 00:14:31.032 { 00:14:31.032 "method": "sock_impl_set_options", 00:14:31.032 "params": { 00:14:31.032 "impl_name": "posix", 00:14:31.032 "recv_buf_size": 2097152, 00:14:31.032 "send_buf_size": 2097152, 00:14:31.032 "enable_recv_pipe": true, 00:14:31.032 "enable_quickack": false, 00:14:31.032 "enable_placement_id": 0, 00:14:31.032 "enable_zerocopy_send_server": true, 00:14:31.032 "enable_zerocopy_send_client": false, 00:14:31.032 "zerocopy_threshold": 0, 00:14:31.032 "tls_version": 0, 00:14:31.033 "enable_ktls": false 00:14:31.033 } 00:14:31.033 }, 00:14:31.033 { 00:14:31.033 "method": "sock_impl_set_options", 00:14:31.033 "params": { 00:14:31.033 "impl_name": "uring", 00:14:31.033 "recv_buf_size": 2097152, 00:14:31.033 "send_buf_size": 2097152, 00:14:31.033 "enable_recv_pipe": true, 00:14:31.033 "enable_quickack": false, 00:14:31.033 "enable_placement_id": 0, 00:14:31.033 "enable_zerocopy_send_server": false, 00:14:31.033 "enable_zerocopy_send_client": false, 00:14:31.033 "zerocopy_threshold": 0, 00:14:31.033 "tls_version": 0, 00:14:31.033 "enable_ktls": false 00:14:31.033 } 00:14:31.033 } 00:14:31.033 ] 
00:14:31.033 }, 00:14:31.033 { 00:14:31.033 "subsystem": "vmd", 00:14:31.033 "config": [] 00:14:31.033 }, 00:14:31.033 { 00:14:31.033 "subsystem": "accel", 00:14:31.033 "config": [ 00:14:31.033 { 00:14:31.033 "method": "accel_set_options", 00:14:31.033 "params": { 00:14:31.033 "small_cache_size": 128, 00:14:31.033 "large_cache_size": 16, 00:14:31.033 "task_count": 2048, 00:14:31.033 "sequence_count": 2048, 00:14:31.033 "buf_count": 2048 00:14:31.033 } 00:14:31.033 } 00:14:31.033 ] 00:14:31.033 }, 00:14:31.033 { 00:14:31.033 "subsystem": "bdev", 00:14:31.033 "config": [ 00:14:31.033 { 00:14:31.033 "method": "bdev_set_options", 00:14:31.033 "params": { 00:14:31.033 "bdev_io_pool_size": 65535, 00:14:31.033 "bdev_io_cache_size": 256, 00:14:31.033 "bdev_auto_examine": true, 00:14:31.033 "iobuf_small_cache_size": 128, 00:14:31.033 "iobuf_large_cache_size": 16 00:14:31.033 } 00:14:31.033 }, 00:14:31.033 { 00:14:31.033 "method": "bdev_raid_set_options", 00:14:31.033 "params": { 00:14:31.033 "process_window_size_kb": 1024, 00:14:31.033 "process_max_bandwidth_mb_sec": 0 00:14:31.033 } 00:14:31.033 }, 00:14:31.033 { 00:14:31.033 "method": "bdev_iscsi_set_options", 00:14:31.033 "params": { 00:14:31.033 "timeout_sec": 30 00:14:31.033 } 00:14:31.033 }, 00:14:31.033 { 00:14:31.033 "method": "bdev_nvme_set_options", 00:14:31.033 "params": { 00:14:31.033 "action_on_timeout": "none", 00:14:31.033 "timeout_us": 0, 00:14:31.033 "timeout_admin_us": 0, 00:14:31.033 "keep_alive_timeout_ms": 10000, 00:14:31.033 "arbitration_burst": 0, 00:14:31.033 "low_priority_weight": 0, 00:14:31.033 "medium_priority_weight": 0, 00:14:31.033 "high_priority_weight": 0, 00:14:31.033 "nvme_adminq_poll_period_us": 10000, 00:14:31.033 "nvme_ioq_poll_period_us": 0, 00:14:31.033 "io_queue_requests": 0, 00:14:31.033 "delay_cmd_submit": true, 00:14:31.033 "transport_retry_count": 4, 00:14:31.033 "bdev_retry_count": 3, 00:14:31.033 "transport_ack_timeout": 0, 00:14:31.033 "ctrlr_loss_timeout_sec": 0, 00:14:31.033 "reconnect_delay_sec": 0, 00:14:31.033 "fast_io_fail_timeout_sec": 0, 00:14:31.033 "disable_auto_failback": false, 00:14:31.033 "generate_uuids": false, 00:14:31.033 "transport_tos": 0, 00:14:31.033 "nvme_error_stat": false, 00:14:31.033 "rdma_srq_size": 0, 00:14:31.033 "io_path_stat": false, 00:14:31.033 "allow_accel_sequence": false, 00:14:31.033 "rdma_max_cq_size": 0, 00:14:31.033 "rdma_cm_event_timeout_ms": 0, 00:14:31.033 "dhchap_digests": [ 00:14:31.033 "sha256", 00:14:31.033 "sha384", 00:14:31.033 "sha512" 00:14:31.033 ], 00:14:31.033 "dhchap_dhgroups": [ 00:14:31.033 "null", 00:14:31.033 "ffdhe2048", 00:14:31.033 "ffdhe3072", 00:14:31.033 "ffdhe4096", 00:14:31.033 "ffdhe6144", 00:14:31.033 "ffdhe8192" 00:14:31.033 ] 00:14:31.033 } 00:14:31.033 }, 00:14:31.033 { 00:14:31.033 "method": "bdev_nvme_set_hotplug", 00:14:31.033 "params": { 00:14:31.033 "period_us": 100000, 00:14:31.033 "enable": false 00:14:31.033 } 00:14:31.033 }, 00:14:31.033 { 00:14:31.033 "method": "bdev_malloc_create", 00:14:31.033 "params": { 00:14:31.033 "name": "malloc0", 00:14:31.033 "num_blocks": 8192, 00:14:31.033 "block_size": 4096, 00:14:31.033 "physical_block_size": 4096, 00:14:31.033 "uuid": "2e7ef916-d165-4e36-8cac-7f0b8b9eda31", 00:14:31.033 "optimal_io_boundary": 0, 00:14:31.033 "md_size": 0, 00:14:31.033 "dif_type": 0, 00:14:31.033 "dif_is_head_of_md": false, 00:14:31.033 "dif_pi_format": 0 00:14:31.033 } 00:14:31.033 }, 00:14:31.033 { 00:14:31.033 "method": "bdev_wait_for_examine" 00:14:31.033 } 00:14:31.033 ] 00:14:31.033 }, 00:14:31.033 { 
00:14:31.033 "subsystem": "nbd", 00:14:31.033 "config": [] 00:14:31.033 }, 00:14:31.033 { 00:14:31.033 "subsystem": "scheduler", 00:14:31.033 "config": [ 00:14:31.033 { 00:14:31.033 "method": "framework_set_scheduler", 00:14:31.033 "params": { 00:14:31.033 "name": "static" 00:14:31.033 } 00:14:31.033 } 00:14:31.033 ] 00:14:31.033 }, 00:14:31.033 { 00:14:31.033 "subsystem": "nvmf", 00:14:31.033 "config": [ 00:14:31.033 { 00:14:31.033 "method": "nvmf_set_config", 00:14:31.033 "params": { 00:14:31.033 "discovery_filter": "match_any", 00:14:31.033 "admin_cmd_passthru": { 00:14:31.033 "identify_ctrlr": false 00:14:31.033 }, 00:14:31.033 "dhchap_digests": [ 00:14:31.033 "sha256", 00:14:31.033 "sha384", 00:14:31.033 "sha512" 00:14:31.033 ], 00:14:31.033 "dhchap_dhgroups": [ 00:14:31.033 "null", 00:14:31.033 "ffdhe2048", 00:14:31.033 "ffdhe3072", 00:14:31.033 "ffdhe4096", 00:14:31.033 "ffdhe6144", 00:14:31.033 "ffdhe8192" 00:14:31.033 ] 00:14:31.033 } 00:14:31.033 }, 00:14:31.033 { 00:14:31.033 "method": "nvmf_set_max_subsystems", 00:14:31.033 "params": { 00:14:31.033 "max_subsystems": 1024 00:14:31.033 } 00:14:31.033 }, 00:14:31.033 { 00:14:31.033 "method": "nvmf_set_crdt", 00:14:31.033 "params": { 00:14:31.033 "crdt1": 0, 00:14:31.033 "crdt2": 0, 00:14:31.033 "crdt3": 0 00:14:31.033 } 00:14:31.033 }, 00:14:31.033 { 00:14:31.033 "method": "nvmf_create_transport", 00:14:31.033 "params": { 00:14:31.033 "trtype": "TCP", 00:14:31.033 "max_queue_depth": 128, 00:14:31.033 "max_io_qpairs_per_ctrlr": 127, 00:14:31.033 "in_capsule_data_size": 4096, 00:14:31.033 "max_io_size": 131072, 00:14:31.033 "io_unit_size": 131072, 00:14:31.033 "max_aq_depth": 128, 00:14:31.033 "num_shared_buffers": 511, 00:14:31.033 "buf_cache_size": 4294967295, 00:14:31.033 "dif_insert_or_strip": false, 00:14:31.033 "zcopy": false, 00:14:31.033 "c2h_success": false, 00:14:31.033 "sock_priority": 0, 00:14:31.033 "abort_timeout_sec": 1, 00:14:31.033 "ack_timeout": 0, 00:14:31.033 "data_wr_pool_size": 0 00:14:31.033 } 00:14:31.033 }, 00:14:31.033 { 00:14:31.033 "method": "nvmf_create_subsystem", 00:14:31.033 "params": { 00:14:31.033 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:31.033 "allow_any_host": false, 00:14:31.033 "serial_number": "00000000000000000000", 00:14:31.033 "model_number": "SPDK bdev Controller", 00:14:31.033 "max_namespaces": 32, 00:14:31.033 "min_cntlid": 1, 00:14:31.033 "max_cntlid": 65519, 00:14:31.033 "ana_reporting": false 00:14:31.033 } 00:14:31.033 }, 00:14:31.033 { 00:14:31.033 "method": "nvmf_subsystem_add_host", 00:14:31.033 "params": { 00:14:31.033 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:31.033 "host": "nqn.2016-06.io.spdk:host1", 00:14:31.033 "psk": "key0" 00:14:31.033 } 00:14:31.033 }, 00:14:31.033 { 00:14:31.033 "method": "nvmf_subsystem_add_ns", 00:14:31.033 "params": { 00:14:31.033 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:31.033 "namespace": { 00:14:31.033 "nsid": 1, 00:14:31.033 "bdev_name": "malloc0", 00:14:31.033 "nguid": "2E7EF916D1654E368CAC7F0B8B9EDA31", 00:14:31.033 "uuid": "2e7ef916-d165-4e36-8cac-7f0b8b9eda31", 00:14:31.033 "no_auto_visible": false 00:14:31.033 } 00:14:31.033 } 00:14:31.033 }, 00:14:31.033 { 00:14:31.033 "method": "nvmf_subsystem_add_listener", 00:14:31.033 "params": { 00:14:31.033 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:31.033 "listen_address": { 00:14:31.033 "trtype": "TCP", 00:14:31.033 "adrfam": "IPv4", 00:14:31.033 "traddr": "10.0.0.3", 00:14:31.033 "trsvcid": "4420" 00:14:31.033 }, 00:14:31.033 "secure_channel": false, 00:14:31.033 "sock_impl": "ssl" 00:14:31.033 
} 00:14:31.033 } 00:14:31.033 ] 00:14:31.033 } 00:14:31.033 ] 00:14:31.033 }' 00:14:31.033 10:58:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:31.033 10:58:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72227 00:14:31.033 10:58:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:14:31.033 10:58:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72227 00:14:31.033 10:58:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72227 ']' 00:14:31.033 10:58:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:31.033 10:58:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:31.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:31.033 10:58:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:31.033 10:58:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:31.034 10:58:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:31.034 [2024-11-15 10:58:17.813049] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:14:31.034 [2024-11-15 10:58:17.813161] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:31.292 [2024-11-15 10:58:17.950509] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:31.292 [2024-11-15 10:58:17.993449] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:31.292 [2024-11-15 10:58:17.993510] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:31.292 [2024-11-15 10:58:17.993520] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:31.292 [2024-11-15 10:58:17.993539] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:31.292 [2024-11-15 10:58:17.993546] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
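The target starting here is not reconfigured command by command: it is fed the JSON produced by the save_config calls above through -c /dev/fd/62, and the bdevperf instance below is restarted the same way via -c /dev/fd/63. A minimal standalone sketch of that save-and-replay pattern, assuming a target and a bdevperf instance already configured as above; the /tmp file names are illustrative only:

```bash
# Dump the live target configuration (keyring, sock, bdev, nvmf subsystems) as JSON...
/home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config > /tmp/tgt_config.json

# ...and start a fresh target preloaded from it, as done with -c /dev/fd/62 above.
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF -c /tmp/tgt_config.json

# The bdevperf side works the same way: save over its own RPC socket, replay with -c.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config > /tmp/bperf_config.json
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock \
    -q 128 -o 4k -w verify -t 1 -c /tmp/bperf_config.json
```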
00:14:31.292 [2024-11-15 10:58:17.993973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:31.551 [2024-11-15 10:58:18.175905] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:31.551 [2024-11-15 10:58:18.263812] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:31.551 [2024-11-15 10:58:18.295754] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:31.551 [2024-11-15 10:58:18.296006] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:32.120 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:32.120 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:32.120 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:32.120 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:32.120 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:32.120 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:32.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:32.120 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=72259 00:14:32.120 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 72259 /var/tmp/bdevperf.sock 00:14:32.120 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72259 ']' 00:14:32.120 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:32.120 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:32.120 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:14:32.120 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:32.120 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:32.120 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:14:32.120 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:14:32.120 "subsystems": [ 00:14:32.120 { 00:14:32.120 "subsystem": "keyring", 00:14:32.120 "config": [ 00:14:32.120 { 00:14:32.120 "method": "keyring_file_add_key", 00:14:32.120 "params": { 00:14:32.120 "name": "key0", 00:14:32.120 "path": "/tmp/tmp.cv7zr8cbIf" 00:14:32.120 } 00:14:32.120 } 00:14:32.120 ] 00:14:32.120 }, 00:14:32.120 { 00:14:32.120 "subsystem": "iobuf", 00:14:32.120 "config": [ 00:14:32.120 { 00:14:32.120 "method": "iobuf_set_options", 00:14:32.120 "params": { 00:14:32.120 "small_pool_count": 8192, 00:14:32.120 "large_pool_count": 1024, 00:14:32.120 "small_bufsize": 8192, 00:14:32.120 "large_bufsize": 135168, 00:14:32.120 "enable_numa": false 00:14:32.120 } 00:14:32.120 } 00:14:32.120 ] 00:14:32.120 }, 00:14:32.120 { 00:14:32.120 "subsystem": "sock", 00:14:32.120 "config": [ 00:14:32.120 { 00:14:32.120 "method": "sock_set_default_impl", 00:14:32.120 "params": { 00:14:32.120 "impl_name": "uring" 00:14:32.120 } 00:14:32.120 }, 00:14:32.120 { 00:14:32.120 "method": "sock_impl_set_options", 00:14:32.120 "params": { 00:14:32.120 "impl_name": "ssl", 00:14:32.120 "recv_buf_size": 4096, 00:14:32.120 "send_buf_size": 4096, 00:14:32.120 "enable_recv_pipe": true, 00:14:32.120 "enable_quickack": false, 00:14:32.120 "enable_placement_id": 0, 00:14:32.120 "enable_zerocopy_send_server": true, 00:14:32.120 "enable_zerocopy_send_client": false, 00:14:32.120 "zerocopy_threshold": 0, 00:14:32.120 "tls_version": 0, 00:14:32.120 "enable_ktls": false 00:14:32.120 } 00:14:32.120 }, 00:14:32.120 { 00:14:32.120 "method": "sock_impl_set_options", 00:14:32.120 "params": { 00:14:32.120 "impl_name": "posix", 00:14:32.120 "recv_buf_size": 2097152, 00:14:32.120 "send_buf_size": 2097152, 00:14:32.120 "enable_recv_pipe": true, 00:14:32.120 "enable_quickack": false, 00:14:32.120 "enable_placement_id": 0, 00:14:32.120 "enable_zerocopy_send_server": true, 00:14:32.120 "enable_zerocopy_send_client": false, 00:14:32.120 "zerocopy_threshold": 0, 00:14:32.120 "tls_version": 0, 00:14:32.120 "enable_ktls": false 00:14:32.120 } 00:14:32.120 }, 00:14:32.120 { 00:14:32.120 "method": "sock_impl_set_options", 00:14:32.120 "params": { 00:14:32.120 "impl_name": "uring", 00:14:32.120 "recv_buf_size": 2097152, 00:14:32.120 "send_buf_size": 2097152, 00:14:32.120 "enable_recv_pipe": true, 00:14:32.121 "enable_quickack": false, 00:14:32.121 "enable_placement_id": 0, 00:14:32.121 "enable_zerocopy_send_server": false, 00:14:32.121 "enable_zerocopy_send_client": false, 00:14:32.121 "zerocopy_threshold": 0, 00:14:32.121 "tls_version": 0, 00:14:32.121 "enable_ktls": false 00:14:32.121 } 00:14:32.121 } 00:14:32.121 ] 00:14:32.121 }, 00:14:32.121 { 00:14:32.121 "subsystem": "vmd", 00:14:32.121 "config": [] 00:14:32.121 }, 00:14:32.121 { 00:14:32.121 "subsystem": "accel", 00:14:32.121 "config": [ 00:14:32.121 { 00:14:32.121 "method": "accel_set_options", 00:14:32.121 "params": { 00:14:32.121 "small_cache_size": 128, 00:14:32.121 "large_cache_size": 16, 00:14:32.121 "task_count": 2048, 00:14:32.121 "sequence_count": 2048, 
00:14:32.121 "buf_count": 2048 00:14:32.121 } 00:14:32.121 } 00:14:32.121 ] 00:14:32.121 }, 00:14:32.121 { 00:14:32.121 "subsystem": "bdev", 00:14:32.121 "config": [ 00:14:32.121 { 00:14:32.121 "method": "bdev_set_options", 00:14:32.121 "params": { 00:14:32.121 "bdev_io_pool_size": 65535, 00:14:32.121 "bdev_io_cache_size": 256, 00:14:32.121 "bdev_auto_examine": true, 00:14:32.121 "iobuf_small_cache_size": 128, 00:14:32.121 "iobuf_large_cache_size": 16 00:14:32.121 } 00:14:32.121 }, 00:14:32.121 { 00:14:32.121 "method": "bdev_raid_set_options", 00:14:32.121 "params": { 00:14:32.121 "process_window_size_kb": 1024, 00:14:32.121 "process_max_bandwidth_mb_sec": 0 00:14:32.121 } 00:14:32.121 }, 00:14:32.121 { 00:14:32.121 "method": "bdev_iscsi_set_options", 00:14:32.121 "params": { 00:14:32.121 "timeout_sec": 30 00:14:32.121 } 00:14:32.121 }, 00:14:32.121 { 00:14:32.121 "method": "bdev_nvme_set_options", 00:14:32.121 "params": { 00:14:32.121 "action_on_timeout": "none", 00:14:32.121 "timeout_us": 0, 00:14:32.121 "timeout_admin_us": 0, 00:14:32.121 "keep_alive_timeout_ms": 10000, 00:14:32.121 "arbitration_burst": 0, 00:14:32.121 "low_priority_weight": 0, 00:14:32.121 "medium_priority_weight": 0, 00:14:32.121 "high_priority_weight": 0, 00:14:32.121 "nvme_adminq_poll_period_us": 10000, 00:14:32.121 "nvme_ioq_poll_period_us": 0, 00:14:32.121 "io_queue_requests": 512, 00:14:32.121 "delay_cmd_submit": true, 00:14:32.121 "transport_retry_count": 4, 00:14:32.121 "bdev_retry_count": 3, 00:14:32.121 "transport_ack_timeout": 0, 00:14:32.121 "ctrlr_loss_timeout_sec": 0, 00:14:32.121 "reconnect_delay_sec": 0, 00:14:32.121 "fast_io_fail_timeout_sec": 0, 00:14:32.121 "disable_auto_failback": false, 00:14:32.121 "generate_uuids": false, 00:14:32.121 "transport_tos": 0, 00:14:32.121 "nvme_error_stat": false, 00:14:32.121 "rdma_srq_size": 0, 00:14:32.121 "io_path_stat": false, 00:14:32.121 "allow_accel_sequence": false, 00:14:32.121 "rdma_max_cq_size": 0, 00:14:32.121 "rdma_cm_event_timeout_ms": 0, 00:14:32.121 "dhchap_digests": [ 00:14:32.121 "sha256", 00:14:32.121 "sha384", 00:14:32.121 "sha512" 00:14:32.121 ], 00:14:32.121 "dhchap_dhgroups": [ 00:14:32.121 "null", 00:14:32.121 "ffdhe2048", 00:14:32.121 "ffdhe3072", 00:14:32.121 "ffdhe4096", 00:14:32.121 "ffdhe6144", 00:14:32.121 "ffdhe8192" 00:14:32.121 ] 00:14:32.121 } 00:14:32.121 }, 00:14:32.121 { 00:14:32.121 "method": "bdev_nvme_attach_controller", 00:14:32.121 "params": { 00:14:32.121 "name": "nvme0", 00:14:32.121 "trtype": "TCP", 00:14:32.121 "adrfam": "IPv4", 00:14:32.121 "traddr": "10.0.0.3", 00:14:32.121 "trsvcid": "4420", 00:14:32.121 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:32.121 "prchk_reftag": false, 00:14:32.121 "prchk_guard": false, 00:14:32.121 "ctrlr_loss_timeout_sec": 0, 00:14:32.121 "reconnect_delay_sec": 0, 00:14:32.121 "fast_io_fail_timeout_sec": 0, 00:14:32.121 "psk": "key0", 00:14:32.121 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:32.121 "hdgst": false, 00:14:32.121 "ddgst": false, 00:14:32.121 "multipath": "multipath" 00:14:32.121 } 00:14:32.121 }, 00:14:32.121 { 00:14:32.121 "method": "bdev_nvme_set_hotplug", 00:14:32.121 "params": { 00:14:32.121 "period_us": 100000, 00:14:32.121 "enable": false 00:14:32.121 } 00:14:32.121 }, 00:14:32.121 { 00:14:32.121 "method": "bdev_enable_histogram", 00:14:32.121 "params": { 00:14:32.121 "name": "nvme0n1", 00:14:32.121 "enable": true 00:14:32.121 } 00:14:32.121 }, 00:14:32.121 { 00:14:32.121 "method": "bdev_wait_for_examine" 00:14:32.121 } 00:14:32.121 ] 00:14:32.121 }, 00:14:32.121 { 
00:14:32.121 "subsystem": "nbd", 00:14:32.121 "config": [] 00:14:32.121 } 00:14:32.121 ] 00:14:32.121 }' 00:14:32.121 [2024-11-15 10:58:18.901369] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:14:32.121 [2024-11-15 10:58:18.901473] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72259 ] 00:14:32.381 [2024-11-15 10:58:19.050720] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:32.381 [2024-11-15 10:58:19.113399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:32.640 [2024-11-15 10:58:19.251713] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:32.640 [2024-11-15 10:58:19.301472] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:33.207 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:33.207 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:33.207 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:14:33.207 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:14:33.466 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:33.466 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:33.466 Running I/O for 1 seconds... 
00:14:34.857 4583.00 IOPS, 17.90 MiB/s 00:14:34.857 Latency(us) 00:14:34.857 [2024-11-15T10:58:21.718Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:34.857 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:34.857 Verification LBA range: start 0x0 length 0x2000 00:14:34.857 nvme0n1 : 1.01 4644.96 18.14 0.00 0.00 27370.64 4468.36 23235.49 00:14:34.857 [2024-11-15T10:58:21.718Z] =================================================================================================================== 00:14:34.857 [2024-11-15T10:58:21.718Z] Total : 4644.96 18.14 0.00 0.00 27370.64 4468.36 23235.49 00:14:34.857 { 00:14:34.857 "results": [ 00:14:34.857 { 00:14:34.857 "job": "nvme0n1", 00:14:34.857 "core_mask": "0x2", 00:14:34.857 "workload": "verify", 00:14:34.857 "status": "finished", 00:14:34.857 "verify_range": { 00:14:34.857 "start": 0, 00:14:34.857 "length": 8192 00:14:34.857 }, 00:14:34.857 "queue_depth": 128, 00:14:34.857 "io_size": 4096, 00:14:34.857 "runtime": 1.014217, 00:14:34.857 "iops": 4644.962567182368, 00:14:34.857 "mibps": 18.144385028056124, 00:14:34.857 "io_failed": 0, 00:14:34.857 "io_timeout": 0, 00:14:34.857 "avg_latency_us": 27370.63849173115, 00:14:34.857 "min_latency_us": 4468.363636363636, 00:14:34.857 "max_latency_us": 23235.49090909091 00:14:34.857 } 00:14:34.857 ], 00:14:34.857 "core_count": 1 00:14:34.857 } 00:14:34.857 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:14:34.857 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:14:34.857 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:14:34.857 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:14:34.857 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:14:34.857 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:14:34.857 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:34.857 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:14:34.857 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:14:34.857 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:14:34.857 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:34.857 nvmf_trace.0 00:14:34.857 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:14:34.857 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 72259 00:14:34.857 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72259 ']' 00:14:34.857 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72259 00:14:34.857 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:34.857 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:34.857 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72259 00:14:34.857 10:58:21 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:34.857 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:34.857 killing process with pid 72259 00:14:34.857 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72259' 00:14:34.857 Received shutdown signal, test time was about 1.000000 seconds 00:14:34.858 00:14:34.858 Latency(us) 00:14:34.858 [2024-11-15T10:58:21.719Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:34.858 [2024-11-15T10:58:21.719Z] =================================================================================================================== 00:14:34.858 [2024-11-15T10:58:21.719Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:34.858 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72259 00:14:34.858 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72259 00:14:34.858 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:14:34.858 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:34.858 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:14:34.858 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:34.858 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:14:34.858 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:34.858 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:34.858 rmmod nvme_tcp 00:14:35.116 rmmod nvme_fabrics 00:14:35.116 rmmod nvme_keyring 00:14:35.116 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:35.116 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:14:35.116 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:14:35.116 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 72227 ']' 00:14:35.117 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 72227 00:14:35.117 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72227 ']' 00:14:35.117 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72227 00:14:35.117 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:35.117 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:35.117 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72227 00:14:35.117 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:35.117 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:35.117 killing process with pid 72227 00:14:35.117 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72227' 00:14:35.117 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72227 00:14:35.117 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # 
wait 72227 00:14:35.376 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:35.376 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:35.376 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:35.376 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:14:35.376 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:35.376 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:14:35.376 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:14:35.376 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:35.376 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:35.376 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:35.376 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:35.376 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:35.376 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:35.376 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:35.376 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:35.376 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:35.376 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:35.376 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:35.376 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:35.376 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:35.376 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:35.376 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:35.635 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:35.635 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:35.635 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:35.635 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:35.635 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@300 -- # return 0 00:14:35.635 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.07qtf3bDHm /tmp/tmp.JUluD6zCsF /tmp/tmp.cv7zr8cbIf 00:14:35.635 00:14:35.635 real 1m22.060s 00:14:35.635 user 2m8.388s 00:14:35.635 sys 0m29.701s 00:14:35.635 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:35.635 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 
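The teardown traced above leans on the fact that every iptables rule the test added earlier carries a comment containing SPDK_NVMF, so cleanup can strip exactly those rules and nothing else:

  iptables-save | grep -v SPDK_NVMF | iptables-restore   # the iptr helper seen in the trace

The veth pairs, the nvmf_br bridge and the nvmf_tgt_ns_spdk namespace are then deleted link by link before the three temporary PSK files are removed.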
00:14:35.635 ************************************ 00:14:35.635 END TEST nvmf_tls 00:14:35.635 ************************************ 00:14:35.635 10:58:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:14:35.635 10:58:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:35.635 10:58:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:35.635 10:58:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:35.635 ************************************ 00:14:35.635 START TEST nvmf_fips 00:14:35.635 ************************************ 00:14:35.635 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:14:35.635 * Looking for test storage... 00:14:35.635 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:14:35.635 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:35.635 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:35.635 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:14:35.895 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:35.895 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:35.895 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:35.895 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:35.895 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:14:35.895 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:14:35.895 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:14:35.895 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:14:35.895 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:14:35.895 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:14:35.895 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:14:35.895 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:35.895 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:14:35.895 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:14:35.895 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:35.895 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:35.895 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:14:35.895 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:14:35.895 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:35.895 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:14:35.895 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:14:35.895 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:14:35.895 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:14:35.895 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:35.895 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:14:35.895 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:14:35.895 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:35.895 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:35.895 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:14:35.895 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:35.895 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:35.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:35.895 --rc genhtml_branch_coverage=1 00:14:35.895 --rc genhtml_function_coverage=1 00:14:35.895 --rc genhtml_legend=1 00:14:35.895 --rc geninfo_all_blocks=1 00:14:35.895 --rc geninfo_unexecuted_blocks=1 00:14:35.895 00:14:35.895 ' 00:14:35.895 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:35.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:35.895 --rc genhtml_branch_coverage=1 00:14:35.895 --rc genhtml_function_coverage=1 00:14:35.895 --rc genhtml_legend=1 00:14:35.895 --rc geninfo_all_blocks=1 00:14:35.895 --rc geninfo_unexecuted_blocks=1 00:14:35.895 00:14:35.895 ' 00:14:35.895 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:35.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:35.895 --rc genhtml_branch_coverage=1 00:14:35.895 --rc genhtml_function_coverage=1 00:14:35.895 --rc genhtml_legend=1 00:14:35.895 --rc geninfo_all_blocks=1 00:14:35.895 --rc geninfo_unexecuted_blocks=1 00:14:35.895 00:14:35.895 ' 00:14:35.895 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:35.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:35.895 --rc genhtml_branch_coverage=1 00:14:35.895 --rc genhtml_function_coverage=1 00:14:35.895 --rc genhtml_legend=1 00:14:35.895 --rc geninfo_all_blocks=1 00:14:35.895 --rc geninfo_unexecuted_blocks=1 00:14:35.895 00:14:35.895 ' 00:14:35.895 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:35.895 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:14:35.895 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:14:35.895 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:35.895 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:35.895 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:35.895 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:35.895 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:35.895 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:35.895 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:35.895 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:35.895 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:35.895 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:14:35.895 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:14:35.895 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:35.895 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:35.895 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:35.895 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:35.895 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:35.895 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:14:35.895 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:35.895 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:35.895 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:35.896 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.896 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.896 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.896 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:14:35.896 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.896 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:14:35.896 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:35.896 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:35.896 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:35.896 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:35.896 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:35.896 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:35.896 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:35.896 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:35.896 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:35.896 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:35.896 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:35.896 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:14:35.896 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local 
target=3.0.0 00:14:35.896 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:14:35.896 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:14:35.896 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:14:35.896 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:14:35.896 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:35.896 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:35.896 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:14:35.896 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:14:35.896 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:14:35.896 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:14:35.896 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:14:35.896 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:14:35.896 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:14:35.896 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:35.896 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:14:35.896 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:14:35.896 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:35.896 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:35.896 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:14:35.896 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:14:35.896 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:14:35.896 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:14:35.896 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:14:35.896 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:14:35.896 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:14:35.896 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:14:35.896 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:14:35.896 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:14:35.896 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:35.896 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:35.896 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:14:35.896 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:35.896 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:14:35.896 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:14:35.896 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:35.896 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:14:35.896 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:14:35.896 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:14:35.896 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:14:35.896 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:14:35.896 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:14:35.896 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:14:35.896 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:35.896 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:14:35.896 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:14:35.896 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:14:35.896 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:14:35.896 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:14:35.896 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:14:35.896 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:14:35.896 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:14:35.896 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:14:35.896 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:14:35.896 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:14:35.896 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:14:35.896 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:14:35.896 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:14:35.896 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:14:35.896 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:14:35.896 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:14:35.896 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:14:35.896 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:14:35.896 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:14:35.896 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:14:35.896 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:14:35.896 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:14:35.896 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:14:35.896 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:14:35.896 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:35.896 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:14:35.896 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:35.896 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:14:35.896 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:35.896 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:14:35.896 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:14:35.896 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:14:35.896 Error setting digest 00:14:35.896 4022163CDC7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:14:35.896 4022163CDC7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:14:35.896 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:14:35.897 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:35.897 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:35.897 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:35.897 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:14:35.897 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:35.897 
10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:35.897 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:35.897 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:35.897 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:35.897 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:35.897 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:35.897 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:35.897 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:35.897 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:35.897 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:35.897 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:35.897 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:35.897 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:35.897 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:35.897 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:35.897 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:35.897 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:35.897 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:35.897 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:35.897 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:35.897 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:35.897 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:35.897 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:35.897 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:35.897 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:35.897 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:35.897 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:35.897 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:35.897 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:35.897 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:35.897 Cannot find device "nvmf_init_br" 00:14:35.897 10:58:22 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true 00:14:35.897 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:35.897 Cannot find device "nvmf_init_br2" 00:14:35.897 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true 00:14:35.897 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:35.897 Cannot find device "nvmf_tgt_br" 00:14:35.897 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # true 00:14:35.897 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:36.156 Cannot find device "nvmf_tgt_br2" 00:14:36.156 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # true 00:14:36.156 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:36.156 Cannot find device "nvmf_init_br" 00:14:36.156 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # true 00:14:36.156 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:36.156 Cannot find device "nvmf_init_br2" 00:14:36.156 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # true 00:14:36.156 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:36.156 Cannot find device "nvmf_tgt_br" 00:14:36.156 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # true 00:14:36.156 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:36.156 Cannot find device "nvmf_tgt_br2" 00:14:36.156 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # true 00:14:36.156 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:36.156 Cannot find device "nvmf_br" 00:14:36.156 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # true 00:14:36.156 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:36.156 Cannot find device "nvmf_init_if" 00:14:36.156 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # true 00:14:36.156 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:36.156 Cannot find device "nvmf_init_if2" 00:14:36.156 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # true 00:14:36.156 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:36.156 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:36.156 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # true 00:14:36.156 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:36.156 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:36.156 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # true 00:14:36.156 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:36.156 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:36.156 10:58:22 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:36.156 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:36.156 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:36.156 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:36.156 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:36.156 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:36.156 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:36.156 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:36.156 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:36.415 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:36.415 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:36.415 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:36.415 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:36.415 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:36.415 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:36.416 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:36.416 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:36.416 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:36.416 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:36.416 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:36.416 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:36.416 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:36.416 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:36.416 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:36.416 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:36.416 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:36.416 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:36.416 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:36.416 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:36.416 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:36.416 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:36.416 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:36.416 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:14:36.416 00:14:36.416 --- 10.0.0.3 ping statistics --- 00:14:36.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:36.416 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:14:36.416 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:36.416 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:36.416 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.056 ms 00:14:36.416 00:14:36.416 --- 10.0.0.4 ping statistics --- 00:14:36.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:36.416 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:14:36.416 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:36.416 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:36.416 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:14:36.416 00:14:36.416 --- 10.0.0.1 ping statistics --- 00:14:36.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:36.416 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:14:36.416 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:36.416 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:36.416 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms 00:14:36.416 00:14:36.416 --- 10.0.0.2 ping statistics --- 00:14:36.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:36.416 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:14:36.416 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:36.416 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@461 -- # return 0 00:14:36.416 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:36.416 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:36.416 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:36.416 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:36.416 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:36.416 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:36.416 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:36.416 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:14:36.416 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:36.416 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:36.416 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:36.416 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=72571 00:14:36.416 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 72571 00:14:36.416 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 72571 ']' 00:14:36.416 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:36.416 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:36.416 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:36.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:36.416 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:36.416 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:36.416 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:36.416 [2024-11-15 10:58:23.274133] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
00:14:36.416 [2024-11-15 10:58:23.274233] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:36.675 [2024-11-15 10:58:23.426844] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:36.675 [2024-11-15 10:58:23.485290] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:36.675 [2024-11-15 10:58:23.485355] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:36.675 [2024-11-15 10:58:23.485371] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:36.675 [2024-11-15 10:58:23.485382] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:36.675 [2024-11-15 10:58:23.485392] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:36.675 [2024-11-15 10:58:23.485939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:36.934 [2024-11-15 10:58:23.562569] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:37.502 10:58:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:37.502 10:58:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:14:37.502 10:58:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:37.502 10:58:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:37.502 10:58:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:37.502 10:58:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:37.502 10:58:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:14:37.502 10:58:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:14:37.502 10:58:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:14:37.502 10:58:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.cLS 00:14:37.502 10:58:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:14:37.502 10:58:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.cLS 00:14:37.502 10:58:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.cLS 00:14:37.502 10:58:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.cLS 00:14:37.502 10:58:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:37.761 [2024-11-15 10:58:24.554015] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:37.761 [2024-11-15 10:58:24.570007] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:37.761 [2024-11-15 10:58:24.570297] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:37.761 malloc0 00:14:38.025 10:58:24 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:38.025 10:58:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:38.026 10:58:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=72614 00:14:38.026 10:58:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 72614 /var/tmp/bdevperf.sock 00:14:38.026 10:58:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 72614 ']' 00:14:38.026 10:58:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:38.026 10:58:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:38.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:38.026 10:58:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:38.026 10:58:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:38.026 10:58:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:38.026 [2024-11-15 10:58:24.695694] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:14:38.026 [2024-11-15 10:58:24.695801] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72614 ] 00:14:38.026 [2024-11-15 10:58:24.843498] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:38.287 [2024-11-15 10:58:24.902230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:38.287 [2024-11-15 10:58:24.958636] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:38.287 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:38.287 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:14:38.287 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.cLS 00:14:38.546 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:38.806 [2024-11-15 10:58:25.564612] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:38.806 TLSTESTn1 00:14:38.806 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:39.065 Running I/O for 10 seconds... 
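For reference, the initiator-side TLS setup that produced the run above reduces to the following sequence; this is a condensed sketch assembled only from commands already traced in this log (the key path /tmp/spdk-psk.cLS is the mktemp output from fips.sh@138, and bdevperf itself was started beforehand with -z -q 128 -o 4096 -w verify -t 10), not additional test output:

    # register the PSK file with the bdevperf application's keyring
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        keyring_file_add_key key0 /tmp/spdk-psk.cLS
    # attach an NVMe-oF/TCP controller to the target, authenticating with that key
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
    # drive the configured verify workload against the resulting TLSTESTn1 bdev
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests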
00:14:40.939 4448.00 IOPS, 17.38 MiB/s [2024-11-15T10:58:29.177Z] 4525.00 IOPS, 17.68 MiB/s [2024-11-15T10:58:30.111Z] 4526.33 IOPS, 17.68 MiB/s [2024-11-15T10:58:31.048Z] 4535.75 IOPS, 17.72 MiB/s [2024-11-15T10:58:31.984Z] 4527.80 IOPS, 17.69 MiB/s [2024-11-15T10:58:32.921Z] 4531.67 IOPS, 17.70 MiB/s [2024-11-15T10:58:33.857Z] 4502.43 IOPS, 17.59 MiB/s [2024-11-15T10:58:34.845Z] 4479.38 IOPS, 17.50 MiB/s [2024-11-15T10:58:35.780Z] 4480.56 IOPS, 17.50 MiB/s [2024-11-15T10:58:36.039Z] 4496.90 IOPS, 17.57 MiB/s 00:14:49.178 Latency(us) 00:14:49.178 [2024-11-15T10:58:36.039Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:49.178 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:49.178 Verification LBA range: start 0x0 length 0x2000 00:14:49.178 TLSTESTn1 : 10.02 4500.89 17.58 0.00 0.00 28382.26 6434.44 28478.37 00:14:49.178 [2024-11-15T10:58:36.039Z] =================================================================================================================== 00:14:49.178 [2024-11-15T10:58:36.039Z] Total : 4500.89 17.58 0.00 0.00 28382.26 6434.44 28478.37 00:14:49.178 { 00:14:49.178 "results": [ 00:14:49.178 { 00:14:49.178 "job": "TLSTESTn1", 00:14:49.178 "core_mask": "0x4", 00:14:49.178 "workload": "verify", 00:14:49.178 "status": "finished", 00:14:49.178 "verify_range": { 00:14:49.178 "start": 0, 00:14:49.178 "length": 8192 00:14:49.178 }, 00:14:49.178 "queue_depth": 128, 00:14:49.178 "io_size": 4096, 00:14:49.178 "runtime": 10.018472, 00:14:49.178 "iops": 4500.885963448318, 00:14:49.178 "mibps": 17.58158579471999, 00:14:49.178 "io_failed": 0, 00:14:49.178 "io_timeout": 0, 00:14:49.178 "avg_latency_us": 28382.257291194568, 00:14:49.178 "min_latency_us": 6434.443636363636, 00:14:49.178 "max_latency_us": 28478.37090909091 00:14:49.178 } 00:14:49.178 ], 00:14:49.178 "core_count": 1 00:14:49.178 } 00:14:49.178 10:58:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:14:49.178 10:58:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:14:49.178 10:58:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:14:49.178 10:58:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:14:49.178 10:58:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:14:49.178 10:58:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:49.178 10:58:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:14:49.178 10:58:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:14:49.178 10:58:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:14:49.178 10:58:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:49.178 nvmf_trace.0 00:14:49.178 10:58:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:14:49.178 10:58:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 72614 00:14:49.178 10:58:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 72614 ']' 00:14:49.178 10:58:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 
72614 00:14:49.178 10:58:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:14:49.178 10:58:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:49.178 10:58:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72614 00:14:49.178 10:58:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:14:49.178 killing process with pid 72614 00:14:49.178 10:58:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:14:49.178 10:58:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72614' 00:14:49.178 10:58:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 72614 00:14:49.178 Received shutdown signal, test time was about 10.000000 seconds 00:14:49.178 00:14:49.178 Latency(us) 00:14:49.178 [2024-11-15T10:58:36.039Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:49.178 [2024-11-15T10:58:36.039Z] =================================================================================================================== 00:14:49.178 [2024-11-15T10:58:36.039Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:49.178 10:58:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 72614 00:14:49.438 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:14:49.438 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:49.438 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:14:49.438 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:49.438 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:14:49.438 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:49.438 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:49.438 rmmod nvme_tcp 00:14:49.438 rmmod nvme_fabrics 00:14:49.438 rmmod nvme_keyring 00:14:49.438 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:49.698 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:14:49.698 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:14:49.698 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 72571 ']' 00:14:49.698 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 72571 00:14:49.698 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 72571 ']' 00:14:49.698 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 72571 00:14:49.698 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:14:49.698 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:49.698 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72571 00:14:49.698 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:49.698 killing process with pid 72571 00:14:49.698 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips 
-- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:49.698 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72571' 00:14:49.698 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 72571 00:14:49.698 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 72571 00:14:49.698 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:49.698 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:49.698 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:49.698 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:14:49.698 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:14:49.698 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:49.698 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:14:49.698 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:49.698 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:49.698 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:49.957 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:49.957 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:49.957 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:49.957 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:49.957 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:49.957 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:49.957 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:49.957 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:49.957 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:49.957 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:49.958 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:49.958 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:49.958 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:49.958 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:49.958 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:49.958 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:50.217 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@300 -- # return 0 00:14:50.217 10:58:36 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.cLS 00:14:50.217 00:14:50.217 real 0m14.490s 00:14:50.217 user 0m19.048s 00:14:50.217 sys 0m6.058s 00:14:50.217 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:50.217 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:50.217 ************************************ 00:14:50.217 END TEST nvmf_fips 00:14:50.217 ************************************ 00:14:50.217 10:58:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:14:50.217 10:58:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:50.217 10:58:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:50.217 10:58:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:50.217 ************************************ 00:14:50.217 START TEST nvmf_control_msg_list 00:14:50.217 ************************************ 00:14:50.217 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:14:50.217 * Looking for test storage... 00:14:50.217 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:50.217 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:50.217 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:14:50.217 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:50.217 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:50.217 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:50.217 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:50.217 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:50.217 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:14:50.217 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:14:50.217 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:14:50.217 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:14:50.217 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:14:50.217 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:14:50.217 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:14:50.217 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:50.217 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:14:50.217 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:14:50.217 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:14:50.217 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:50.217 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:14:50.217 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:14:50.217 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:50.217 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:14:50.217 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:14:50.217 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:14:50.217 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:14:50.217 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:50.217 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:14:50.217 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:14:50.217 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:50.217 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:50.217 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:14:50.217 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:50.217 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:50.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:50.217 --rc genhtml_branch_coverage=1 00:14:50.217 --rc genhtml_function_coverage=1 00:14:50.217 --rc genhtml_legend=1 00:14:50.217 --rc geninfo_all_blocks=1 00:14:50.217 --rc geninfo_unexecuted_blocks=1 00:14:50.217 00:14:50.217 ' 00:14:50.217 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:50.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:50.217 --rc genhtml_branch_coverage=1 00:14:50.217 --rc genhtml_function_coverage=1 00:14:50.217 --rc genhtml_legend=1 00:14:50.217 --rc geninfo_all_blocks=1 00:14:50.217 --rc geninfo_unexecuted_blocks=1 00:14:50.217 00:14:50.217 ' 00:14:50.217 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:50.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:50.217 --rc genhtml_branch_coverage=1 00:14:50.217 --rc genhtml_function_coverage=1 00:14:50.217 --rc genhtml_legend=1 00:14:50.217 --rc geninfo_all_blocks=1 00:14:50.217 --rc geninfo_unexecuted_blocks=1 00:14:50.217 00:14:50.217 ' 00:14:50.217 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:50.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:50.217 --rc genhtml_branch_coverage=1 00:14:50.217 --rc genhtml_function_coverage=1 00:14:50.217 --rc genhtml_legend=1 00:14:50.217 --rc geninfo_all_blocks=1 00:14:50.217 --rc 
geninfo_unexecuted_blocks=1 00:14:50.217 00:14:50.217 ' 00:14:50.217 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:50.217 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:14:50.217 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:50.217 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:50.217 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:50.217 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:50.217 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:50.217 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:50.217 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:50.217 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:50.217 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:50.478 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:50.478 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:14:50.478 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:14:50.478 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:50.478 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:50.478 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:50.478 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:50.478 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:50.478 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:14:50.478 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:50.478 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:50.478 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:50.478 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.478 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.478 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.478 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:14:50.478 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.478 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:14:50.478 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:50.478 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:50.478 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:50.478 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:50.478 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:50.478 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:50.478 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:50.478 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:50.478 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:50.478 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:50.478 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:14:50.478 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:50.478 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:50.478 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:50.478 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:50.478 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:50.478 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:50.478 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:50.478 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:50.478 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:50.478 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:50.478 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:50.478 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:50.478 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:50.479 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:50.479 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:50.479 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:50.479 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:50.479 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:50.479 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:50.479 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:50.479 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:50.479 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:50.479 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:50.479 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:50.479 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:50.479 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:50.479 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:50.479 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:50.479 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:50.479 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:50.479 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:50.479 Cannot find device "nvmf_init_br" 00:14:50.479 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # true 00:14:50.479 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:50.479 Cannot find device "nvmf_init_br2" 00:14:50.479 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # true 00:14:50.479 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:50.479 Cannot find device "nvmf_tgt_br" 00:14:50.479 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # true 00:14:50.479 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:50.479 Cannot find device "nvmf_tgt_br2" 00:14:50.479 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # true 00:14:50.479 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:50.479 Cannot find device "nvmf_init_br" 00:14:50.479 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # true 00:14:50.479 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:50.479 Cannot find device "nvmf_init_br2" 00:14:50.479 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # true 00:14:50.479 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:50.479 Cannot find device "nvmf_tgt_br" 00:14:50.479 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # true 00:14:50.479 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:50.479 Cannot find device "nvmf_tgt_br2" 00:14:50.479 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # true 00:14:50.479 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:50.479 Cannot find device "nvmf_br" 00:14:50.479 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # true 00:14:50.479 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:50.479 Cannot find 
device "nvmf_init_if" 00:14:50.479 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # true 00:14:50.479 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:50.479 Cannot find device "nvmf_init_if2" 00:14:50.479 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # true 00:14:50.479 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:50.479 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:50.479 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # true 00:14:50.479 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:50.479 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:50.479 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # true 00:14:50.479 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:50.479 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:50.479 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:50.479 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:50.479 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:50.479 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:50.479 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:50.479 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:50.479 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:50.479 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:50.479 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:50.479 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:50.479 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:50.479 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:50.479 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:50.479 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:50.739 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:50.739 10:58:37 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:50.739 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:50.739 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:50.739 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:50.739 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:50.739 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:50.739 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:50.739 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:50.739 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:50.739 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:50.739 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:50.739 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:50.739 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:50.739 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:50.739 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:50.739 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:50.739 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:50.739 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:14:50.739 00:14:50.739 --- 10.0.0.3 ping statistics --- 00:14:50.739 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:50.739 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:14:50.739 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:50.739 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:50.739 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.082 ms 00:14:50.739 00:14:50.739 --- 10.0.0.4 ping statistics --- 00:14:50.739 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:50.739 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:14:50.739 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:50.739 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:50.739 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:14:50.739 00:14:50.739 --- 10.0.0.1 ping statistics --- 00:14:50.739 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:50.739 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:14:50.739 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:50.739 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:50.739 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:14:50.739 00:14:50.739 --- 10.0.0.2 ping statistics --- 00:14:50.739 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:50.739 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:14:50.739 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:50.739 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@461 -- # return 0 00:14:50.739 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:50.739 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:50.739 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:50.739 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:50.739 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:50.739 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:50.739 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:50.739 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:14:50.739 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:50.739 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:50.739 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:50.739 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=72992 00:14:50.739 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:50.739 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 72992 00:14:50.739 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 72992 ']' 00:14:50.739 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:50.739 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:50.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:50.739 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:50.739 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:50.739 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:50.739 [2024-11-15 10:58:37.551032] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:14:50.740 [2024-11-15 10:58:37.551125] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:50.999 [2024-11-15 10:58:37.702209] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:50.999 [2024-11-15 10:58:37.759101] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:50.999 [2024-11-15 10:58:37.759169] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:50.999 [2024-11-15 10:58:37.759183] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:50.999 [2024-11-15 10:58:37.759193] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:50.999 [2024-11-15 10:58:37.759202] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:50.999 [2024-11-15 10:58:37.759680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:50.999 [2024-11-15 10:58:37.818328] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:51.258 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:51.258 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:14:51.258 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:51.258 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:51.258 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:51.258 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:51.258 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:14:51.258 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:14:51.258 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:14:51.258 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.258 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:51.258 [2024-11-15 10:58:37.932104] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:51.258 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.258 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd 
nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:14:51.258 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.258 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:51.258 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.258 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:14:51.258 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.258 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:51.258 Malloc0 00:14:51.258 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.258 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:14:51.258 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.258 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:51.258 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.258 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:14:51.258 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.258 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:51.258 [2024-11-15 10:58:37.971151] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:51.258 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.258 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=73017 00:14:51.258 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:14:51.258 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=73018 00:14:51.258 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:14:51.258 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=73019 00:14:51.258 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 73017 00:14:51.258 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:14:51.518 [2024-11-15 10:58:38.159769] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:14:51.518 [2024-11-15 10:58:38.159968] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:14:51.518 [2024-11-15 10:58:38.169826] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:14:52.454 Initializing NVMe Controllers 00:14:52.454 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:14:52.454 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:14:52.454 Initialization complete. Launching workers. 00:14:52.454 ======================================================== 00:14:52.454 Latency(us) 00:14:52.454 Device Information : IOPS MiB/s Average min max 00:14:52.454 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 3621.00 14.14 275.84 198.54 574.05 00:14:52.454 ======================================================== 00:14:52.454 Total : 3621.00 14.14 275.84 198.54 574.05 00:14:52.454 00:14:52.454 Initializing NVMe Controllers 00:14:52.454 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:14:52.454 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:14:52.454 Initialization complete. Launching workers. 00:14:52.454 ======================================================== 00:14:52.454 Latency(us) 00:14:52.454 Device Information : IOPS MiB/s Average min max 00:14:52.454 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 3620.00 14.14 275.90 175.77 577.65 00:14:52.454 ======================================================== 00:14:52.454 Total : 3620.00 14.14 275.90 175.77 577.65 00:14:52.455 00:14:52.455 Initializing NVMe Controllers 00:14:52.455 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:14:52.455 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:14:52.455 Initialization complete. Launching workers. 
00:14:52.455 ======================================================== 00:14:52.455 Latency(us) 00:14:52.455 Device Information : IOPS MiB/s Average min max 00:14:52.455 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 3648.00 14.25 273.73 115.61 585.28 00:14:52.455 ======================================================== 00:14:52.455 Total : 3648.00 14.25 273.73 115.61 585.28 00:14:52.455 00:14:52.455 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 73018 00:14:52.455 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 73019 00:14:52.455 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:14:52.455 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:14:52.455 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:52.455 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:14:52.455 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:52.455 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:14:52.455 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:52.455 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:52.455 rmmod nvme_tcp 00:14:52.455 rmmod nvme_fabrics 00:14:52.455 rmmod nvme_keyring 00:14:52.455 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:52.714 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:14:52.714 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:14:52.714 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 72992 ']' 00:14:52.714 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 72992 00:14:52.714 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 72992 ']' 00:14:52.714 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 72992 00:14:52.714 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:14:52.714 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:52.714 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72992 00:14:52.714 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:52.714 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:52.714 killing process with pid 72992 00:14:52.714 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72992' 00:14:52.714 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 72992 00:14:52.714 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@978 -- # wait 72992 00:14:52.714 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:52.714 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:52.714 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:52.714 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:14:52.714 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:14:52.714 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:52.714 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:14:52.714 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:52.714 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:52.714 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:52.714 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:52.973 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:52.973 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:52.973 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:52.973 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:52.973 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:52.973 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:52.973 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:52.973 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:52.973 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:52.973 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:52.973 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:52.973 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:52.973 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:52.973 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:52.973 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:52.973 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@300 -- # return 0 00:14:52.973 00:14:52.973 real 0m2.926s 00:14:52.973 user 0m4.754s 00:14:52.973 
sys 0m1.360s 00:14:52.973 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:52.973 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:52.973 ************************************ 00:14:52.973 END TEST nvmf_control_msg_list 00:14:52.973 ************************************ 00:14:53.233 10:58:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:14:53.233 10:58:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:53.233 10:58:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:53.233 10:58:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:53.233 ************************************ 00:14:53.233 START TEST nvmf_wait_for_buf 00:14:53.233 ************************************ 00:14:53.233 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:14:53.233 * Looking for test storage... 00:14:53.233 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:53.233 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:53.233 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:14:53.233 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:53.233 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:53.233 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:53.233 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:53.233 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:53.233 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:14:53.233 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:14:53.233 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:14:53.233 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:14:53.233 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:14:53.233 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:14:53.233 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:14:53.233 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:53.233 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:14:53.233 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:14:53.233 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:53.233 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:53.233 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:14:53.233 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:14:53.233 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:53.233 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:14:53.233 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:14:53.233 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:14:53.233 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:14:53.233 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:53.233 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:14:53.233 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:14:53.233 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:53.233 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:53.233 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:14:53.233 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:53.233 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:53.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:53.233 --rc genhtml_branch_coverage=1 00:14:53.233 --rc genhtml_function_coverage=1 00:14:53.233 --rc genhtml_legend=1 00:14:53.233 --rc geninfo_all_blocks=1 00:14:53.233 --rc geninfo_unexecuted_blocks=1 00:14:53.233 00:14:53.233 ' 00:14:53.233 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:53.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:53.233 --rc genhtml_branch_coverage=1 00:14:53.233 --rc genhtml_function_coverage=1 00:14:53.233 --rc genhtml_legend=1 00:14:53.233 --rc geninfo_all_blocks=1 00:14:53.233 --rc geninfo_unexecuted_blocks=1 00:14:53.233 00:14:53.233 ' 00:14:53.233 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:53.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:53.233 --rc genhtml_branch_coverage=1 00:14:53.233 --rc genhtml_function_coverage=1 00:14:53.233 --rc genhtml_legend=1 00:14:53.233 --rc geninfo_all_blocks=1 00:14:53.233 --rc geninfo_unexecuted_blocks=1 00:14:53.233 00:14:53.233 ' 00:14:53.233 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:53.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:53.233 --rc genhtml_branch_coverage=1 00:14:53.233 --rc genhtml_function_coverage=1 00:14:53.233 --rc genhtml_legend=1 00:14:53.233 --rc geninfo_all_blocks=1 00:14:53.233 --rc geninfo_unexecuted_blocks=1 00:14:53.233 00:14:53.233 ' 00:14:53.233 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:53.233 10:58:40 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:14:53.233 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:53.233 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:53.233 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:53.233 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:53.233 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:53.233 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:53.233 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:53.233 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:53.233 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:53.233 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:53.233 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:14:53.234 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:14:53.234 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:53.234 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:53.234 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:53.234 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:53.234 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:53.234 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:14:53.234 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:53.234 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:53.234 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:53.234 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.234 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.234 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.234 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:14:53.234 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.234 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:14:53.234 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:53.234 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:53.234 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:53.234 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:53.234 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:53.234 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:53.234 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:53.234 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:53.234 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:53.234 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:53.234 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:14:53.234 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 
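The nvmftestinit trace that follows (nvmf_veth_init) builds a veth/bridge topology with the target-side interfaces moved into the nvmf_tgt_ns_spdk namespace. A minimal sketch of the equivalent setup, trimmed to a single initiator/target pair and using only interface names and 10.0.0.0/24 addresses that appear in the trace below:

  # create the target namespace and one veth pair per side
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  # address the initiator end on the host and the target end inside the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

  # bring everything up and bridge the host-side peer ends together
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br

  # let NVMe/TCP traffic reach port 4420 on the initiator interface
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

The ping checks near the end of the setup trace (10.0.0.3/10.0.0.4 from the host, 10.0.0.1/10.0.0.2 from inside the namespace) confirm the bridge is forwarding before the target is started.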
00:14:53.234 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:53.234 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:53.234 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:53.234 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:53.234 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:53.234 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:53.234 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:53.234 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:53.234 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:53.234 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:53.234 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:53.234 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:53.234 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:53.234 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:53.234 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:53.234 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:53.234 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:53.234 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:53.234 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:53.234 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:53.234 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:53.234 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:53.234 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:53.234 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:53.234 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:53.234 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:53.234 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:53.234 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:53.234 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:53.234 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:53.234 Cannot find device "nvmf_init_br" 00:14:53.234 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # true 00:14:53.234 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:53.234 Cannot find device "nvmf_init_br2" 00:14:53.234 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # true 00:14:53.234 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:53.234 Cannot find device "nvmf_tgt_br" 00:14:53.234 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # true 00:14:53.234 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:53.494 Cannot find device "nvmf_tgt_br2" 00:14:53.494 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # true 00:14:53.494 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:53.494 Cannot find device "nvmf_init_br" 00:14:53.494 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # true 00:14:53.494 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:53.494 Cannot find device "nvmf_init_br2" 00:14:53.494 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # true 00:14:53.494 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:53.494 Cannot find device "nvmf_tgt_br" 00:14:53.494 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # true 00:14:53.494 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:53.494 Cannot find device "nvmf_tgt_br2" 00:14:53.494 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # true 00:14:53.494 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:53.494 Cannot find device "nvmf_br" 00:14:53.494 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # true 00:14:53.494 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:53.494 Cannot find device "nvmf_init_if" 00:14:53.494 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # true 00:14:53.494 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:53.494 Cannot find device "nvmf_init_if2" 00:14:53.494 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # true 00:14:53.494 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:53.494 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:53.494 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # true 00:14:53.494 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:53.494 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:53.494 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # true 00:14:53.494 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:53.494 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:53.494 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:53.494 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:53.494 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:53.494 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:53.494 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:53.494 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:53.494 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:53.494 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:53.494 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:53.494 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:53.494 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:53.494 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:53.494 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:53.494 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:53.494 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:53.494 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:53.494 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:53.494 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:53.494 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:53.494 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:53.494 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:53.494 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:53.494 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:53.754 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:53.754 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:53.754 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:53.754 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:53.754 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:53.754 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:53.754 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:53.754 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:53.754 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:53.754 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:14:53.754 00:14:53.754 --- 10.0.0.3 ping statistics --- 00:14:53.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:53.754 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:14:53.754 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:53.754 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:53.754 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.062 ms 00:14:53.754 00:14:53.754 --- 10.0.0.4 ping statistics --- 00:14:53.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:53.754 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:14:53.754 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:53.754 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:53.754 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:14:53.754 00:14:53.754 --- 10.0.0.1 ping statistics --- 00:14:53.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:53.754 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:14:53.754 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:53.754 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:53.754 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:14:53.754 00:14:53.754 --- 10.0.0.2 ping statistics --- 00:14:53.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:53.754 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:14:53.754 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:53.754 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@461 -- # return 0 00:14:53.754 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:53.754 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:53.754 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:53.754 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:53.754 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:53.754 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:53.754 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:53.754 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:14:53.754 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:53.754 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:53.754 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:53.754 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=73255 00:14:53.754 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:14:53.754 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 73255 00:14:53.754 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 73255 ']' 00:14:53.754 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:53.754 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:53.754 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:53.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:53.754 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:53.754 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:53.754 [2024-11-15 10:58:40.496404] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
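The wait_for_buf test starts nvmf_tgt with --wait-for-rpc and configures it entirely over RPC, deliberately capping the small iobuf pool at 154 buffers so that the 128 KiB reads issued by perf exhaust it and are forced to retry; the nonzero small_pool.retry count (4750 further down) is what the test asserts on. A condensed sketch of the sequence traced below, written with the harness's rpc_cmd helper (which forwards to scripts/rpc.py against the target's RPC socket); all option values are taken verbatim from the trace:

  rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0
  rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192
  rpc_cmd framework_start_init
  rpc_cmd bdev_malloc_create -b Malloc0 32 512
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -n 24 -b 24
  rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001
  rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
  rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420

  # drive 128 KiB random reads at the listener, then read back the retry counter
  spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'
  rpc_cmd iobuf_get_stats | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry'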
00:14:53.754 [2024-11-15 10:58:40.497079] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:54.013 [2024-11-15 10:58:40.644988] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:54.013 [2024-11-15 10:58:40.687100] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:54.013 [2024-11-15 10:58:40.687164] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:54.013 [2024-11-15 10:58:40.687189] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:54.013 [2024-11-15 10:58:40.687196] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:54.014 [2024-11-15 10:58:40.687203] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:54.014 [2024-11-15 10:58:40.687577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:54.014 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:54.014 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:14:54.014 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:54.014 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:54.014 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:54.014 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:54.014 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:14:54.014 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:14:54.014 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:14:54.014 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.014 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:54.014 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.014 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:14:54.014 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.014 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:54.014 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.014 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:14:54.014 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.014 10:58:40 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:54.014 [2024-11-15 10:58:40.846926] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:54.273 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.273 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:14:54.273 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.273 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:54.273 Malloc0 00:14:54.273 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.273 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:14:54.273 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.273 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:54.273 [2024-11-15 10:58:40.911451] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:54.273 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.273 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:14:54.273 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.273 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:54.273 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.273 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:14:54.273 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.273 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:54.273 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.273 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:14:54.273 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.273 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:54.273 [2024-11-15 10:58:40.939590] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:54.273 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.273 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:14:54.532 [2024-11-15 10:58:41.136725] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:14:55.909 Initializing NVMe Controllers 00:14:55.909 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:14:55.909 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:14:55.909 Initialization complete. Launching workers. 00:14:55.909 ======================================================== 00:14:55.909 Latency(us) 00:14:55.909 Device Information : IOPS MiB/s Average min max 00:14:55.909 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 499.99 62.50 8000.61 6969.87 10943.44 00:14:55.909 ======================================================== 00:14:55.909 Total : 499.99 62.50 8000.61 6969.87 10943.44 00:14:55.909 00:14:55.909 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:14:55.909 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:14:55.909 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.909 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:55.909 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.909 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=4750 00:14:55.909 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 4750 -eq 0 ]] 00:14:55.909 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:14:55.909 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:14:55.909 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:55.909 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:14:55.909 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:55.909 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:14:55.909 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:55.909 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:55.909 rmmod nvme_tcp 00:14:55.909 rmmod nvme_fabrics 00:14:55.909 rmmod nvme_keyring 00:14:55.909 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:55.909 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:14:55.909 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:14:55.909 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 73255 ']' 00:14:55.909 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 73255 00:14:55.909 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 73255 ']' 00:14:55.909 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- 
# kill -0 73255 00:14:55.909 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:14:55.909 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:55.909 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73255 00:14:55.910 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:55.910 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:55.910 killing process with pid 73255 00:14:55.910 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73255' 00:14:55.910 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 73255 00:14:55.910 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 73255 00:14:56.168 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:56.168 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:56.168 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:56.168 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:14:56.168 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:56.168 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:14:56.168 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:14:56.168 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:56.168 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:56.168 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:56.168 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:56.168 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:56.168 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:56.168 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:56.168 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:56.168 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:56.168 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:56.168 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:56.168 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:56.168 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:56.168 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:56.168 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:56.168 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:56.168 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:56.168 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:56.168 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:56.425 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@300 -- # return 0 00:14:56.425 00:14:56.425 real 0m3.179s 00:14:56.425 user 0m2.529s 00:14:56.425 sys 0m0.786s 00:14:56.425 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:56.425 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:56.425 ************************************ 00:14:56.425 END TEST nvmf_wait_for_buf 00:14:56.425 ************************************ 00:14:56.425 10:58:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:14:56.425 10:58:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ virt == phy ]] 00:14:56.425 10:58:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:14:56.426 10:58:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:56.426 10:58:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:56.426 10:58:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:56.426 ************************************ 00:14:56.426 START TEST nvmf_nsid 00:14:56.426 ************************************ 00:14:56.426 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:14:56.426 * Looking for test storage... 
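Both tests finish through nvmftestfini, whose trace appears above: unload the nvme-* modules, kill the target, strip the SPDK iptables rules, and tear the veth topology down. A minimal sketch of that cleanup, assuming remove_spdk_ns deletes the namespace with ip netns delete (the trace only shows the wrapper name):

  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid"                                  # killprocess in the harness wraps kill
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  ip link set nvmf_init_br nomaster
  ip link set nvmf_init_br down
  ip link delete nvmf_br type bridge
  ip link delete nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
  ip netns delete nvmf_tgt_ns_spdk                 # assumed implementation of remove_spdk_ns

Because each test removes the namespace and links on exit, the "Cannot find device" and "Cannot open network namespace" messages at the start of the next test's nvmftestinit are expected rather than fatal; the topology is simply rebuilt from scratch.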
00:14:56.426 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:56.426 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:56.426 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:14:56.426 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:56.426 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:56.426 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:56.426 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:56.426 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:56.426 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:14:56.426 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:14:56.426 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:14:56.426 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:14:56.426 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:14:56.426 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:14:56.426 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:14:56.426 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:56.426 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:14:56.426 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:14:56.426 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:56.426 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:56.426 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:14:56.426 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:14:56.426 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:56.426 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:14:56.426 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:14:56.426 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:14:56.426 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:14:56.426 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:56.426 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:14:56.426 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:14:56.426 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:56.426 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:56.426 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:14:56.426 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:56.426 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:56.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:56.426 --rc genhtml_branch_coverage=1 00:14:56.426 --rc genhtml_function_coverage=1 00:14:56.426 --rc genhtml_legend=1 00:14:56.426 --rc geninfo_all_blocks=1 00:14:56.426 --rc geninfo_unexecuted_blocks=1 00:14:56.426 00:14:56.426 ' 00:14:56.426 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:56.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:56.426 --rc genhtml_branch_coverage=1 00:14:56.426 --rc genhtml_function_coverage=1 00:14:56.426 --rc genhtml_legend=1 00:14:56.426 --rc geninfo_all_blocks=1 00:14:56.426 --rc geninfo_unexecuted_blocks=1 00:14:56.426 00:14:56.426 ' 00:14:56.426 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:56.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:56.426 --rc genhtml_branch_coverage=1 00:14:56.426 --rc genhtml_function_coverage=1 00:14:56.426 --rc genhtml_legend=1 00:14:56.426 --rc geninfo_all_blocks=1 00:14:56.426 --rc geninfo_unexecuted_blocks=1 00:14:56.426 00:14:56.426 ' 00:14:56.426 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:56.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:56.426 --rc genhtml_branch_coverage=1 00:14:56.426 --rc genhtml_function_coverage=1 00:14:56.426 --rc genhtml_legend=1 00:14:56.426 --rc geninfo_all_blocks=1 00:14:56.426 --rc geninfo_unexecuted_blocks=1 00:14:56.426 00:14:56.426 ' 00:14:56.426 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:56.426 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:14:56.426 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:14:56.426 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:56.426 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:56.426 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:56.426 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:56.426 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:56.426 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:56.426 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:56.426 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:56.426 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:56.684 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:14:56.684 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:14:56.684 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:56.684 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:56.684 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:56.684 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:56.684 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:56.684 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:14:56.684 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:56.684 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:56.684 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:56.684 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.684 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.684 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.684 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:14:56.684 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.684 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:14:56.684 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:56.685 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:56.685 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:56.685 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:56.685 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:56.685 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:56.685 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:56.685 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:56.685 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:56.685 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:56.685 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:14:56.685 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:14:56.685 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # 
subnqn3=nqn.2024-10.io.spdk:cnode2 00:14:56.685 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:14:56.685 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:14:56.685 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:14:56.685 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:56.685 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:56.685 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:56.685 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:56.685 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:56.685 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:56.685 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:56.685 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:56.685 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:56.685 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:56.685 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:56.685 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:56.685 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:56.685 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:56.685 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:56.685 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:56.685 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:56.685 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:56.685 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:56.685 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:56.685 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:56.685 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:56.685 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:56.685 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:56.685 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:56.685 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:56.685 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:56.685 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:56.685 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:56.685 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:56.685 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:56.685 Cannot find device "nvmf_init_br" 00:14:56.685 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # true 00:14:56.685 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:56.685 Cannot find device "nvmf_init_br2" 00:14:56.685 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # true 00:14:56.685 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:56.685 Cannot find device "nvmf_tgt_br" 00:14:56.685 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # true 00:14:56.685 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:56.685 Cannot find device "nvmf_tgt_br2" 00:14:56.685 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # true 00:14:56.685 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:56.685 Cannot find device "nvmf_init_br" 00:14:56.685 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # true 00:14:56.685 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:56.685 Cannot find device "nvmf_init_br2" 00:14:56.685 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # true 00:14:56.685 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:56.685 Cannot find device "nvmf_tgt_br" 00:14:56.685 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # true 00:14:56.685 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:56.685 Cannot find device "nvmf_tgt_br2" 00:14:56.685 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # true 00:14:56.685 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:56.685 Cannot find device "nvmf_br" 00:14:56.685 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # true 00:14:56.685 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:56.685 Cannot find device "nvmf_init_if" 00:14:56.685 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # true 00:14:56.685 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:56.685 Cannot find device "nvmf_init_if2" 00:14:56.685 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # true 00:14:56.685 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:56.685 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:56.685 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # true 00:14:56.685 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 
00:14:56.685 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:56.685 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # true 00:14:56.685 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:56.685 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:56.685 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:56.685 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:56.685 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:56.685 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:56.685 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:56.685 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:56.945 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:56.945 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:56.945 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:56.945 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:56.945 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:56.945 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:56.945 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:56.945 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:56.945 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:56.945 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:56.945 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:56.945 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:56.945 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:56.945 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:56.945 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:56.945 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:56.945 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:56.945 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
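At this point in the trace, nvmf_veth_init has built the test network: a namespace nvmf_tgt_ns_spdk holding the target ends of two veth pairs (10.0.0.3 and 10.0.0.4), two initiator-side pairs left on the host (10.0.0.1 and 10.0.0.2), and a bridge nvmf_br joining the four host-side peers, which is what lets the ping checks that follow succeed. A condensed sketch of the same topology with one pair per side, reusing the device and address names from the log (run as root; error handling and the second pair of each kind omitted):

set -e
ip netns add nvmf_tgt_ns_spdk

# one initiator-side pair and one target-side pair; the trace creates two of each
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# bridge the host-side peers so initiator and target traffic can cross
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

ping -c 1 10.0.0.3    # host -> namespace, as in the checks below
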
00:14:56.945 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:56.945 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:56.945 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:56.945 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:56.945 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:56.945 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:56.945 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:56.945 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:56.945 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.088 ms 00:14:56.945 00:14:56.945 --- 10.0.0.3 ping statistics --- 00:14:56.945 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:56.945 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:14:56.945 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:56.945 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:56.945 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.050 ms 00:14:56.945 00:14:56.945 --- 10.0.0.4 ping statistics --- 00:14:56.945 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:56.945 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:14:56.945 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:56.945 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:56.945 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:14:56.945 00:14:56.945 --- 10.0.0.1 ping statistics --- 00:14:56.945 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:56.945 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:14:56.945 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:56.945 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:56.945 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:14:56.945 00:14:56.945 --- 10.0.0.2 ping statistics --- 00:14:56.945 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:56.945 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:14:56.945 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:56.945 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@461 -- # return 0 00:14:56.945 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:56.945 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:56.945 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:56.945 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:56.945 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:56.945 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:56.945 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:56.945 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:14:56.945 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:56.945 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:56.945 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:14:56.945 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=73508 00:14:56.945 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 73508 00:14:56.945 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 73508 ']' 00:14:56.945 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:56.945 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:56.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:56.945 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:56.945 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:56.945 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:14:56.946 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:14:56.946 [2024-11-15 10:58:43.784064] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
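The ipts rules added just before the ping checks, and the iptr call during cleanup further down, are two halves of one pattern: each rule is inserted with an '-m comment --comment SPDK_NVMF:...' tag, and teardown removes every tagged rule in a single iptables-save | grep -v SPDK_NVMF | iptables-restore pass without disturbing unrelated rules. A small standalone sketch of that tag-and-sweep idea (the helper names here are illustrative, not the SPDK functions):

TAG=SPDK_NVMF

add_tagged_rule() {
    # record the rule's own arguments in the comment so it is easy to audit later
    iptables "$@" -m comment --comment "${TAG}:$*"
}

sweep_tagged_rules() {
    # drop every tagged rule in one pass, leaving unrelated rules untouched
    iptables-save | grep -v "$TAG" | iptables-restore
}

add_tagged_rule -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
# ... run the NVMe/TCP tests ...
sweep_tagged_rules
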
00:14:56.946 [2024-11-15 10:58:43.784158] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:57.205 [2024-11-15 10:58:43.935254] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:57.205 [2024-11-15 10:58:43.991051] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:57.205 [2024-11-15 10:58:43.991108] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:57.205 [2024-11-15 10:58:43.991122] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:57.205 [2024-11-15 10:58:43.991132] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:57.205 [2024-11-15 10:58:43.991142] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:57.205 [2024-11-15 10:58:43.991578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:57.205 [2024-11-15 10:58:44.047916] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:57.464 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:57.464 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:14:57.464 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:57.464 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:57.464 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:14:57.464 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:57.464 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:14:57.464 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=73538 00:14:57.464 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:14:57.464 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.3 00:14:57.464 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:14:57.464 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:14:57.464 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:14:57.464 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:14:57.464 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:14:57.464 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:14:57.464 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:14:57.464 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:14:57.464 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:14:57.464 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 
-- # [[ -z 10.0.0.1 ]] 00:14:57.464 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:14:57.464 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:14:57.464 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:14:57.464 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=530929e9-d7b0-464c-8963-cf2f9a723c6c 00:14:57.464 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:14:57.465 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=22854988-6b49-4da5-8b0a-b95279811d67 00:14:57.465 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:14:57.465 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=0822bd25-0acc-47bc-84a9-e6a84b4d3e79 00:14:57.465 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:14:57.465 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.465 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:14:57.465 null0 00:14:57.465 null1 00:14:57.465 null2 00:14:57.465 [2024-11-15 10:58:44.212365] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:57.465 [2024-11-15 10:58:44.229921] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:14:57.465 [2024-11-15 10:58:44.230033] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73538 ] 00:14:57.465 [2024-11-15 10:58:44.236494] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:57.465 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.465 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 73538 /var/tmp/tgt2.sock 00:14:57.465 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 73538 ']' 00:14:57.465 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:14:57.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 00:14:57.465 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:57.465 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 
00:14:57.465 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:57.465 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:14:57.724 [2024-11-15 10:58:44.381018] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:57.724 [2024-11-15 10:58:44.437729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:57.724 [2024-11-15 10:58:44.511347] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:57.983 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:57.983 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:14:57.983 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:14:58.549 [2024-11-15 10:58:45.117990] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:58.549 [2024-11-15 10:58:45.134044] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:14:58.549 nvme0n1 nvme0n2 00:14:58.549 nvme1n1 00:14:58.549 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:14:58.549 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:14:58.549 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --hostid=02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:14:58.549 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:14:58.549 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:14:58.549 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:14:58.549 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:14:58.549 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:14:58.549 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:14:58.549 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:14:58.549 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:14:58.549 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:14:58.549 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:14:58.549 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:14:58.550 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:14:58.550 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:14:59.486 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:14:59.486 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:14:59.486 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:14:59.486 10:58:46 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:14:59.745 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:14:59.745 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 530929e9-d7b0-464c-8963-cf2f9a723c6c 00:14:59.745 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:14:59.745 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:14:59.745 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:14:59.745 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:14:59.745 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:14:59.745 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=530929e9d7b0464c8963cf2f9a723c6c 00:14:59.745 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 530929E9D7B0464C8963CF2F9A723C6C 00:14:59.745 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 530929E9D7B0464C8963CF2F9A723C6C == \5\3\0\9\2\9\E\9\D\7\B\0\4\6\4\C\8\9\6\3\C\F\2\F\9\A\7\2\3\C\6\C ]] 00:14:59.745 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:14:59.745 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:14:59.745 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:14:59.745 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:14:59.745 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:14:59.745 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:14:59.745 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:14:59.745 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 22854988-6b49-4da5-8b0a-b95279811d67 00:14:59.745 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:14:59.745 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:14:59.746 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:14:59.746 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:14:59.746 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:14:59.746 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=228549886b494da58b0ab95279811d67 00:14:59.746 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 228549886B494DA58B0AB95279811D67 00:14:59.746 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 228549886B494DA58B0AB95279811D67 == \2\2\8\5\4\9\8\8\6\B\4\9\4\D\A\5\8\B\0\A\B\9\5\2\7\9\8\1\1\D\6\7 ]] 00:14:59.746 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:14:59.746 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:14:59.746 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:14:59.746 10:58:46 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:14:59.746 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:14:59.746 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:14:59.746 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:14:59.746 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 0822bd25-0acc-47bc-84a9-e6a84b4d3e79 00:14:59.746 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:14:59.746 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:14:59.746 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:14:59.746 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:14:59.746 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:14:59.746 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=0822bd250acc47bc84a9e6a84b4d3e79 00:14:59.746 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 0822BD250ACC47BC84A9E6A84B4D3E79 00:14:59.746 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 0822BD250ACC47BC84A9E6A84B4D3E79 == \0\8\2\2\B\D\2\5\0\A\C\C\4\7\B\C\8\4\A\9\E\6\A\8\4\B\4\D\3\E\7\9 ]] 00:14:59.746 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:15:00.005 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:15:00.005 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:15:00.005 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 73538 00:15:00.005 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 73538 ']' 00:15:00.005 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 73538 00:15:00.005 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:15:00.005 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:00.005 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73538 00:15:00.005 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:00.005 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:00.005 killing process with pid 73538 00:15:00.005 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73538' 00:15:00.005 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 73538 00:15:00.005 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 73538 00:15:00.582 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:15:00.582 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:00.582 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:15:00.582 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # 
'[' tcp == tcp ']' 00:15:00.582 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:15:00.582 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:00.582 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:00.582 rmmod nvme_tcp 00:15:00.582 rmmod nvme_fabrics 00:15:00.582 rmmod nvme_keyring 00:15:00.582 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:00.582 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:15:00.582 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:15:00.582 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 73508 ']' 00:15:00.582 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 73508 00:15:00.582 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 73508 ']' 00:15:00.582 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 73508 00:15:00.582 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:15:00.582 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:00.582 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73508 00:15:00.582 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:00.582 killing process with pid 73508 00:15:00.582 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:00.582 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73508' 00:15:00.582 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 73508 00:15:00.582 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 73508 00:15:00.841 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:00.841 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:00.841 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:00.841 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:15:00.841 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:15:00.841 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:00.841 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:15:00.841 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:00.841 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:00.841 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:00.841 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:00.841 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:00.841 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@236 -- # ip link set 
nvmf_tgt_br2 nomaster 00:15:00.841 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:00.841 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:00.841 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:00.841 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:00.841 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:00.841 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:00.841 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:00.841 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:01.100 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:01.100 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:01.100 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:01.100 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:01.100 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:01.100 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@300 -- # return 0 00:15:01.100 00:15:01.100 real 0m4.672s 00:15:01.100 user 0m6.918s 00:15:01.100 sys 0m1.652s 00:15:01.100 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:01.100 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:15:01.100 ************************************ 00:15:01.100 END TEST nvmf_nsid 00:15:01.100 ************************************ 00:15:01.100 10:58:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:15:01.100 00:15:01.100 real 4m54.336s 00:15:01.100 user 10m10.999s 00:15:01.100 sys 1m9.705s 00:15:01.100 10:58:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:01.100 10:58:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:01.100 ************************************ 00:15:01.100 END TEST nvmf_target_extra 00:15:01.100 ************************************ 00:15:01.100 10:58:47 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:15:01.100 10:58:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:01.101 10:58:47 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:01.101 10:58:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:01.101 ************************************ 00:15:01.101 START TEST nvmf_host 00:15:01.101 ************************************ 00:15:01.101 10:58:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:15:01.101 * Looking for test storage... 
00:15:01.101 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:15:01.101 10:58:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:01.101 10:58:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:15:01.101 10:58:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:01.360 10:58:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:01.360 10:58:48 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:01.360 10:58:48 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:01.360 10:58:48 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:01.360 10:58:48 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:15:01.360 10:58:48 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:15:01.360 10:58:48 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:15:01.360 10:58:48 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:15:01.360 10:58:48 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:15:01.360 10:58:48 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:15:01.360 10:58:48 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:15:01.360 10:58:48 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:01.360 10:58:48 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:15:01.360 10:58:48 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:15:01.360 10:58:48 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:01.360 10:58:48 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:01.360 10:58:48 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:15:01.360 10:58:48 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:15:01.360 10:58:48 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:01.360 10:58:48 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:15:01.361 10:58:48 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:15:01.361 10:58:48 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:15:01.361 10:58:48 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:15:01.361 10:58:48 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:01.361 10:58:48 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:15:01.361 10:58:48 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:15:01.361 10:58:48 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:01.361 10:58:48 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:01.361 10:58:48 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:15:01.361 10:58:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:01.361 10:58:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:01.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:01.361 --rc genhtml_branch_coverage=1 00:15:01.361 --rc genhtml_function_coverage=1 00:15:01.361 --rc genhtml_legend=1 00:15:01.361 --rc geninfo_all_blocks=1 00:15:01.361 --rc geninfo_unexecuted_blocks=1 00:15:01.361 00:15:01.361 ' 00:15:01.361 10:58:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:01.361 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:15:01.361 --rc genhtml_branch_coverage=1 00:15:01.361 --rc genhtml_function_coverage=1 00:15:01.361 --rc genhtml_legend=1 00:15:01.361 --rc geninfo_all_blocks=1 00:15:01.361 --rc geninfo_unexecuted_blocks=1 00:15:01.361 00:15:01.361 ' 00:15:01.361 10:58:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:01.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:01.361 --rc genhtml_branch_coverage=1 00:15:01.361 --rc genhtml_function_coverage=1 00:15:01.361 --rc genhtml_legend=1 00:15:01.361 --rc geninfo_all_blocks=1 00:15:01.361 --rc geninfo_unexecuted_blocks=1 00:15:01.361 00:15:01.361 ' 00:15:01.361 10:58:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:01.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:01.361 --rc genhtml_branch_coverage=1 00:15:01.361 --rc genhtml_function_coverage=1 00:15:01.361 --rc genhtml_legend=1 00:15:01.361 --rc geninfo_all_blocks=1 00:15:01.361 --rc geninfo_unexecuted_blocks=1 00:15:01.361 00:15:01.361 ' 00:15:01.361 10:58:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:01.361 10:58:48 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:15:01.361 10:58:48 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:01.361 10:58:48 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:01.361 10:58:48 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:01.361 10:58:48 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:01.361 10:58:48 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:01.361 10:58:48 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:01.361 10:58:48 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:01.361 10:58:48 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:01.361 10:58:48 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:01.361 10:58:48 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:01.361 10:58:48 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:15:01.361 10:58:48 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:15:01.361 10:58:48 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:01.361 10:58:48 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:01.361 10:58:48 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:01.361 10:58:48 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:01.361 10:58:48 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:01.361 10:58:48 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:15:01.361 10:58:48 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:01.361 10:58:48 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:01.361 10:58:48 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:01.361 10:58:48 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:01.361 10:58:48 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:01.361 10:58:48 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:01.361 10:58:48 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:15:01.361 10:58:48 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:01.361 10:58:48 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:15:01.361 10:58:48 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:01.361 10:58:48 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:01.361 10:58:48 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:01.361 10:58:48 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:01.361 10:58:48 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:01.361 10:58:48 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:01.361 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:01.361 10:58:48 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:01.361 10:58:48 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:01.361 10:58:48 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:01.361 10:58:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:15:01.361 10:58:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:15:01.361 10:58:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 1 -eq 0 ]] 00:15:01.361 10:58:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:15:01.361 
10:58:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:01.361 10:58:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:01.361 10:58:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:01.361 ************************************ 00:15:01.361 START TEST nvmf_identify 00:15:01.361 ************************************ 00:15:01.361 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:15:01.361 * Looking for test storage... 00:15:01.361 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:01.361 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:01.361 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:15:01.361 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:01.361 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:01.361 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:01.361 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:01.622 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:01.622 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:15:01.622 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:15:01.622 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:15:01.622 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:15:01.622 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:15:01.622 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:15:01.622 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:15:01.622 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:01.622 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:15:01.622 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:15:01.622 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:01.622 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:01.622 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:15:01.622 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:15:01.622 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:01.622 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:15:01.622 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:15:01.622 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:15:01.622 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:15:01.622 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:01.622 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:15:01.622 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:15:01.622 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:01.622 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:01.622 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:15:01.622 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:01.622 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:01.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:01.622 --rc genhtml_branch_coverage=1 00:15:01.622 --rc genhtml_function_coverage=1 00:15:01.622 --rc genhtml_legend=1 00:15:01.622 --rc geninfo_all_blocks=1 00:15:01.622 --rc geninfo_unexecuted_blocks=1 00:15:01.622 00:15:01.622 ' 00:15:01.622 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:01.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:01.622 --rc genhtml_branch_coverage=1 00:15:01.622 --rc genhtml_function_coverage=1 00:15:01.622 --rc genhtml_legend=1 00:15:01.622 --rc geninfo_all_blocks=1 00:15:01.622 --rc geninfo_unexecuted_blocks=1 00:15:01.622 00:15:01.622 ' 00:15:01.622 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:01.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:01.622 --rc genhtml_branch_coverage=1 00:15:01.622 --rc genhtml_function_coverage=1 00:15:01.622 --rc genhtml_legend=1 00:15:01.622 --rc geninfo_all_blocks=1 00:15:01.622 --rc geninfo_unexecuted_blocks=1 00:15:01.622 00:15:01.622 ' 00:15:01.622 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:01.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:01.622 --rc genhtml_branch_coverage=1 00:15:01.622 --rc genhtml_function_coverage=1 00:15:01.622 --rc genhtml_legend=1 00:15:01.622 --rc geninfo_all_blocks=1 00:15:01.622 --rc geninfo_unexecuted_blocks=1 00:15:01.622 00:15:01.622 ' 00:15:01.622 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:01.622 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:15:01.622 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:01.622 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:15:01.622 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:01.622 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:01.622 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:01.622 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:01.622 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:01.622 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:01.622 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:01.622 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:01.622 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:15:01.622 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:15:01.622 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:01.622 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:01.622 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:01.622 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:01.622 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:01.622 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:15:01.622 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:01.622 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:01.622 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:01.622 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:01.622 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:01.622 
10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:01.622 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:15:01.622 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:01.622 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:15:01.622 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:01.622 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:01.622 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:01.622 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:01.622 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:01.622 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:01.622 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:01.622 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:01.622 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:01.622 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:01.622 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:01.622 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:01.622 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:15:01.622 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:01.622 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:01.622 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:01.623 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:01.623 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:01.623 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:01.623 10:58:48 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:01.623 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:01.623 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:01.623 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:01.623 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:01.623 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:01.623 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:01.623 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:01.623 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:01.623 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:01.623 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:01.623 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:01.623 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:01.623 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:01.623 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:01.623 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:01.623 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:01.623 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:01.623 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:01.623 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:01.623 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:01.623 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:01.623 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:01.623 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:01.623 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:01.623 Cannot find device "nvmf_init_br" 00:15:01.623 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true 00:15:01.623 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:01.623 Cannot find device "nvmf_init_br2" 00:15:01.623 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true 00:15:01.623 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:01.623 Cannot find device "nvmf_tgt_br" 00:15:01.623 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # true 00:15:01.623 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 
00:15:01.623 Cannot find device "nvmf_tgt_br2" 00:15:01.623 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # true 00:15:01.623 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:01.623 Cannot find device "nvmf_init_br" 00:15:01.623 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # true 00:15:01.623 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:01.623 Cannot find device "nvmf_init_br2" 00:15:01.623 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # true 00:15:01.623 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:01.623 Cannot find device "nvmf_tgt_br" 00:15:01.623 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # true 00:15:01.623 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:01.623 Cannot find device "nvmf_tgt_br2" 00:15:01.623 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # true 00:15:01.623 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:01.623 Cannot find device "nvmf_br" 00:15:01.623 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # true 00:15:01.623 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:01.623 Cannot find device "nvmf_init_if" 00:15:01.623 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # true 00:15:01.623 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:01.623 Cannot find device "nvmf_init_if2" 00:15:01.623 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # true 00:15:01.623 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:01.623 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:01.623 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # true 00:15:01.623 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:01.623 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:01.623 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # true 00:15:01.623 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:01.623 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:01.623 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:01.623 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:01.623 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:01.623 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:01.623 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:01.883 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:01.883 
10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:01.883 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:01.883 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:01.883 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:01.883 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:01.883 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:01.883 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:01.883 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:01.883 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:01.883 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:01.883 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:01.883 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:01.883 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:01.883 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:01.883 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:01.883 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:01.883 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:01.883 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:01.883 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:01.883 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:01.883 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:01.883 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:01.883 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:01.883 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:01.883 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:01.883 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:15:01.883 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:15:01.883 00:15:01.883 --- 10.0.0.3 ping statistics --- 00:15:01.883 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:01.883 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:15:01.883 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:01.883 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:01.883 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:15:01.883 00:15:01.883 --- 10.0.0.4 ping statistics --- 00:15:01.883 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:01.883 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:15:01.883 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:01.883 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:01.883 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.046 ms 00:15:01.883 00:15:01.883 --- 10.0.0.1 ping statistics --- 00:15:01.883 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:01.883 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:15:01.883 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:01.883 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:01.883 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:15:01.883 00:15:01.883 --- 10.0.0.2 ping statistics --- 00:15:01.883 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:01.883 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:15:01.883 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:01.883 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@461 -- # return 0 00:15:01.883 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:01.883 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:01.883 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:01.883 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:01.883 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:01.883 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:01.883 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:01.883 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:15:01.883 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:01.883 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:01.883 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=73890 00:15:01.883 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:01.883 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 73890 00:15:01.883 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 73890 ']' 00:15:01.883 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:01.883 Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock... 00:15:01.883 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:01.883 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:01.883 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:01.883 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:01.883 10:58:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:01.883 [2024-11-15 10:58:48.723112] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:15:01.883 [2024-11-15 10:58:48.723202] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:02.142 [2024-11-15 10:58:48.876653] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:02.142 [2024-11-15 10:58:48.933970] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:02.143 [2024-11-15 10:58:48.934022] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:02.143 [2024-11-15 10:58:48.934048] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:02.143 [2024-11-15 10:58:48.934059] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:02.143 [2024-11-15 10:58:48.934068] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
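The setup traced above builds the test network (veth pairs nvmf_init_if/nvmf_init_if2 and nvmf_tgt_if/nvmf_tgt_if2, the nvmf_tgt_ns_spdk namespace, the nvmf_br bridge, and iptables ACCEPT rules for port 4420), verifies it with pings across 10.0.0.1-10.0.0.4, and then starts nvmf_tgt inside the namespace while waitforlisten blocks on the RPC socket. A minimal sketch of that launch step, assuming the same repository path and namespace name as in this run (the readiness poll below is a simplification of what waitforlisten actually does):

  # Sketch only (not captured output): start the target inside the test
  # namespace the same way identify.sh@18 does, then wait for the RPC socket.
  SPDK=/home/vagrant/spdk_repo/spdk
  ip netns exec nvmf_tgt_ns_spdk \
      "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # waitforlisten additionally checks that RPC responds; polling for the
  # UNIX socket is a rough stand-in for that here.
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done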
00:15:02.143 [2024-11-15 10:58:48.935377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:02.143 [2024-11-15 10:58:48.935516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:02.143 [2024-11-15 10:58:48.935640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:02.143 [2024-11-15 10:58:48.935642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:02.143 [2024-11-15 10:58:48.992876] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:03.080 10:58:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:03.080 10:58:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:15:03.080 10:58:49 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:03.080 10:58:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.080 10:58:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:03.080 [2024-11-15 10:58:49.723993] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:03.080 10:58:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.080 10:58:49 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:15:03.080 10:58:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:03.080 10:58:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:03.080 10:58:49 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:03.080 10:58:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.080 10:58:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:03.080 Malloc0 00:15:03.080 10:58:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.080 10:58:49 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:03.080 10:58:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.080 10:58:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:03.080 10:58:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.080 10:58:49 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:15:03.080 10:58:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.080 10:58:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:03.080 10:58:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.080 10:58:49 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:03.080 10:58:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.080 10:58:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:03.080 [2024-11-15 10:58:49.824464] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:03.080 10:58:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.080 10:58:49 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:15:03.080 10:58:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.080 10:58:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:03.080 10:58:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.080 10:58:49 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:15:03.080 10:58:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.080 10:58:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:03.080 [ 00:15:03.080 { 00:15:03.080 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:03.080 "subtype": "Discovery", 00:15:03.080 "listen_addresses": [ 00:15:03.080 { 00:15:03.080 "trtype": "TCP", 00:15:03.080 "adrfam": "IPv4", 00:15:03.080 "traddr": "10.0.0.3", 00:15:03.080 "trsvcid": "4420" 00:15:03.080 } 00:15:03.080 ], 00:15:03.080 "allow_any_host": true, 00:15:03.080 "hosts": [] 00:15:03.080 }, 00:15:03.080 { 00:15:03.080 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:03.080 "subtype": "NVMe", 00:15:03.080 "listen_addresses": [ 00:15:03.080 { 00:15:03.080 "trtype": "TCP", 00:15:03.080 "adrfam": "IPv4", 00:15:03.080 "traddr": "10.0.0.3", 00:15:03.080 "trsvcid": "4420" 00:15:03.080 } 00:15:03.080 ], 00:15:03.080 "allow_any_host": true, 00:15:03.080 "hosts": [], 00:15:03.080 "serial_number": "SPDK00000000000001", 00:15:03.080 "model_number": "SPDK bdev Controller", 00:15:03.080 "max_namespaces": 32, 00:15:03.080 "min_cntlid": 1, 00:15:03.080 "max_cntlid": 65519, 00:15:03.080 "namespaces": [ 00:15:03.080 { 00:15:03.080 "nsid": 1, 00:15:03.080 "bdev_name": "Malloc0", 00:15:03.080 "name": "Malloc0", 00:15:03.080 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:15:03.080 "eui64": "ABCDEF0123456789", 00:15:03.080 "uuid": "c182177f-255b-45ec-b5d9-e498d58df194" 00:15:03.080 } 00:15:03.080 ] 00:15:03.080 } 00:15:03.080 ] 00:15:03.080 10:58:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.080 10:58:49 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:15:03.080 [2024-11-15 10:58:49.877942] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
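The rpc_cmd calls above configure the running target over /var/tmp/spdk.sock: a TCP transport, a Malloc0 bdev (64 MiB, 512-byte blocks), subsystem nqn.2016-06.io.spdk:cnode1 with Malloc0 attached as namespace 1, and TCP listeners on 10.0.0.3:4420 for both that subsystem and discovery; nvmf_get_subsystems then dumps the resulting configuration before spdk_nvme_identify is pointed at the discovery NQN. Issued directly with scripts/rpc.py (the client rpc_cmd wraps), the same configuration would look roughly like this sketch:

  # Sketch only (not captured output): the same RPCs as issued by identify.sh,
  # sent with scripts/rpc.py against the default /var/tmp/spdk.sock socket.
  SPDK=/home/vagrant/spdk_repo/spdk
  "$SPDK/scripts/rpc.py" nvmf_create_transport -t tcp -o -u 8192
  "$SPDK/scripts/rpc.py" bdev_malloc_create 64 512 -b Malloc0
  "$SPDK/scripts/rpc.py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  "$SPDK/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
      --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  "$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  "$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
  # The identify run itself, as invoked by the test:
  "$SPDK/build/bin/spdk_nvme_identify" \
      -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all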
00:15:03.080 [2024-11-15 10:58:49.878004] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73925 ] 00:15:03.342 [2024-11-15 10:58:50.029605] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:15:03.342 [2024-11-15 10:58:50.029693] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:15:03.342 [2024-11-15 10:58:50.029699] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:15:03.342 [2024-11-15 10:58:50.029711] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:15:03.342 [2024-11-15 10:58:50.029720] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:15:03.342 [2024-11-15 10:58:50.030028] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:15:03.342 [2024-11-15 10:58:50.030099] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x4e8750 0 00:15:03.342 [2024-11-15 10:58:50.036580] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:15:03.342 [2024-11-15 10:58:50.036604] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:15:03.342 [2024-11-15 10:58:50.036625] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:15:03.342 [2024-11-15 10:58:50.036629] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:15:03.342 [2024-11-15 10:58:50.036659] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:03.342 [2024-11-15 10:58:50.036666] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:03.342 [2024-11-15 10:58:50.036670] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4e8750) 00:15:03.342 [2024-11-15 10:58:50.036683] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:15:03.342 [2024-11-15 10:58:50.036713] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x54c740, cid 0, qid 0 00:15:03.342 [2024-11-15 10:58:50.044592] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:03.342 [2024-11-15 10:58:50.044612] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:03.342 [2024-11-15 10:58:50.044633] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:03.342 [2024-11-15 10:58:50.044638] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x54c740) on tqpair=0x4e8750 00:15:03.342 [2024-11-15 10:58:50.044651] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:15:03.342 [2024-11-15 10:58:50.044659] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:15:03.342 [2024-11-15 10:58:50.044665] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:15:03.342 [2024-11-15 10:58:50.044681] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:03.342 [2024-11-15 10:58:50.044687] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:15:03.342 [2024-11-15 10:58:50.044691] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4e8750) 00:15:03.342 [2024-11-15 10:58:50.044700] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.342 [2024-11-15 10:58:50.044726] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x54c740, cid 0, qid 0 00:15:03.342 [2024-11-15 10:58:50.044789] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:03.342 [2024-11-15 10:58:50.044796] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:03.342 [2024-11-15 10:58:50.044800] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:03.342 [2024-11-15 10:58:50.044804] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x54c740) on tqpair=0x4e8750 00:15:03.342 [2024-11-15 10:58:50.044810] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:15:03.342 [2024-11-15 10:58:50.044818] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:15:03.342 [2024-11-15 10:58:50.044825] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:03.342 [2024-11-15 10:58:50.044830] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:03.342 [2024-11-15 10:58:50.044833] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4e8750) 00:15:03.342 [2024-11-15 10:58:50.044841] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.342 [2024-11-15 10:58:50.044874] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x54c740, cid 0, qid 0 00:15:03.342 [2024-11-15 10:58:50.044942] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:03.342 [2024-11-15 10:58:50.044949] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:03.342 [2024-11-15 10:58:50.044953] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:03.342 [2024-11-15 10:58:50.044957] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x54c740) on tqpair=0x4e8750 00:15:03.342 [2024-11-15 10:58:50.044964] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:15:03.342 [2024-11-15 10:58:50.044972] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:15:03.342 [2024-11-15 10:58:50.044980] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:03.342 [2024-11-15 10:58:50.044985] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:03.342 [2024-11-15 10:58:50.044988] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4e8750) 00:15:03.342 [2024-11-15 10:58:50.044996] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.342 [2024-11-15 10:58:50.045014] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x54c740, cid 0, qid 0 00:15:03.342 [2024-11-15 10:58:50.045065] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:03.342 [2024-11-15 10:58:50.045072] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:03.342 [2024-11-15 10:58:50.045075] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:03.342 [2024-11-15 10:58:50.045080] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x54c740) on tqpair=0x4e8750 00:15:03.342 [2024-11-15 10:58:50.045086] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:03.342 [2024-11-15 10:58:50.045096] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:03.342 [2024-11-15 10:58:50.045101] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:03.342 [2024-11-15 10:58:50.045105] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4e8750) 00:15:03.342 [2024-11-15 10:58:50.045113] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.342 [2024-11-15 10:58:50.045130] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x54c740, cid 0, qid 0 00:15:03.343 [2024-11-15 10:58:50.045179] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:03.343 [2024-11-15 10:58:50.045186] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:03.343 [2024-11-15 10:58:50.045190] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:03.343 [2024-11-15 10:58:50.045194] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x54c740) on tqpair=0x4e8750 00:15:03.343 [2024-11-15 10:58:50.045199] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:15:03.343 [2024-11-15 10:58:50.045205] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:15:03.343 [2024-11-15 10:58:50.045214] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:03.343 [2024-11-15 10:58:50.045326] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:15:03.343 [2024-11-15 10:58:50.045332] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:03.343 [2024-11-15 10:58:50.045342] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:03.343 [2024-11-15 10:58:50.045347] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:03.343 [2024-11-15 10:58:50.045351] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4e8750) 00:15:03.343 [2024-11-15 10:58:50.045358] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.343 [2024-11-15 10:58:50.045378] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x54c740, cid 0, qid 0 00:15:03.343 [2024-11-15 10:58:50.045425] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:03.343 [2024-11-15 10:58:50.045432] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:03.343 [2024-11-15 10:58:50.045435] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:15:03.343 [2024-11-15 10:58:50.045440] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x54c740) on tqpair=0x4e8750 00:15:03.343 [2024-11-15 10:58:50.045445] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:03.343 [2024-11-15 10:58:50.045456] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:03.343 [2024-11-15 10:58:50.045460] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:03.343 [2024-11-15 10:58:50.045464] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4e8750) 00:15:03.343 [2024-11-15 10:58:50.045472] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.343 [2024-11-15 10:58:50.045490] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x54c740, cid 0, qid 0 00:15:03.343 [2024-11-15 10:58:50.045530] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:03.343 [2024-11-15 10:58:50.045552] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:03.343 [2024-11-15 10:58:50.045556] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:03.343 [2024-11-15 10:58:50.045560] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x54c740) on tqpair=0x4e8750 00:15:03.343 [2024-11-15 10:58:50.045566] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:03.343 [2024-11-15 10:58:50.045571] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:15:03.343 [2024-11-15 10:58:50.045580] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:15:03.343 [2024-11-15 10:58:50.045615] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:15:03.343 [2024-11-15 10:58:50.045629] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:03.343 [2024-11-15 10:58:50.045634] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4e8750) 00:15:03.343 [2024-11-15 10:58:50.045643] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.343 [2024-11-15 10:58:50.045664] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x54c740, cid 0, qid 0 00:15:03.343 [2024-11-15 10:58:50.045756] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:03.343 [2024-11-15 10:58:50.045764] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:03.343 [2024-11-15 10:58:50.045768] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:03.343 [2024-11-15 10:58:50.045772] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x4e8750): datao=0, datal=4096, cccid=0 00:15:03.343 [2024-11-15 10:58:50.045778] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x54c740) on tqpair(0x4e8750): expected_datao=0, payload_size=4096 00:15:03.343 [2024-11-15 10:58:50.045783] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:15:03.343 [2024-11-15 10:58:50.045792] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:03.343 [2024-11-15 10:58:50.045797] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:03.343 [2024-11-15 10:58:50.045805] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:03.343 [2024-11-15 10:58:50.045812] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:03.343 [2024-11-15 10:58:50.045816] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:03.343 [2024-11-15 10:58:50.045820] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x54c740) on tqpair=0x4e8750 00:15:03.343 [2024-11-15 10:58:50.045829] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:15:03.343 [2024-11-15 10:58:50.045835] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:15:03.343 [2024-11-15 10:58:50.045840] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:15:03.343 [2024-11-15 10:58:50.045846] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:15:03.343 [2024-11-15 10:58:50.045851] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:15:03.343 [2024-11-15 10:58:50.045857] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:15:03.343 [2024-11-15 10:58:50.045871] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:15:03.343 [2024-11-15 10:58:50.045882] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:03.343 [2024-11-15 10:58:50.045886] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:03.343 [2024-11-15 10:58:50.045890] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4e8750) 00:15:03.343 [2024-11-15 10:58:50.045898] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:03.343 [2024-11-15 10:58:50.045919] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x54c740, cid 0, qid 0 00:15:03.343 [2024-11-15 10:58:50.045970] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:03.343 [2024-11-15 10:58:50.045977] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:03.343 [2024-11-15 10:58:50.045996] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:03.343 [2024-11-15 10:58:50.046000] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x54c740) on tqpair=0x4e8750 00:15:03.343 [2024-11-15 10:58:50.046009] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:03.343 [2024-11-15 10:58:50.046013] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:03.343 [2024-11-15 10:58:50.046017] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4e8750) 00:15:03.343 [2024-11-15 10:58:50.046024] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:03.343 [2024-11-15 10:58:50.046030] 
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:03.343 [2024-11-15 10:58:50.046035] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:03.343 [2024-11-15 10:58:50.046039] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x4e8750) 00:15:03.343 [2024-11-15 10:58:50.046045] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:03.343 [2024-11-15 10:58:50.046051] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:03.343 [2024-11-15 10:58:50.046055] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:03.343 [2024-11-15 10:58:50.046058] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x4e8750) 00:15:03.343 [2024-11-15 10:58:50.046064] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:03.343 [2024-11-15 10:58:50.046070] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:03.343 [2024-11-15 10:58:50.046074] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:03.343 [2024-11-15 10:58:50.046078] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4e8750) 00:15:03.343 [2024-11-15 10:58:50.046084] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:03.343 [2024-11-15 10:58:50.046089] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:15:03.343 [2024-11-15 10:58:50.046103] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:03.343 [2024-11-15 10:58:50.046111] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:03.343 [2024-11-15 10:58:50.046115] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x4e8750) 00:15:03.343 [2024-11-15 10:58:50.046122] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.343 [2024-11-15 10:58:50.046142] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x54c740, cid 0, qid 0 00:15:03.343 [2024-11-15 10:58:50.046150] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x54c8c0, cid 1, qid 0 00:15:03.343 [2024-11-15 10:58:50.046155] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x54ca40, cid 2, qid 0 00:15:03.343 [2024-11-15 10:58:50.046160] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x54cbc0, cid 3, qid 0 00:15:03.343 [2024-11-15 10:58:50.046164] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x54cd40, cid 4, qid 0 00:15:03.343 [2024-11-15 10:58:50.046245] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:03.343 [2024-11-15 10:58:50.046252] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:03.343 [2024-11-15 10:58:50.046255] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:03.343 [2024-11-15 10:58:50.046260] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x54cd40) on tqpair=0x4e8750 00:15:03.344 [2024-11-15 10:58:50.046265] 
nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:15:03.344 [2024-11-15 10:58:50.046271] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:15:03.344 [2024-11-15 10:58:50.046283] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:03.344 [2024-11-15 10:58:50.046288] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x4e8750) 00:15:03.344 [2024-11-15 10:58:50.046295] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.344 [2024-11-15 10:58:50.046313] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x54cd40, cid 4, qid 0 00:15:03.344 [2024-11-15 10:58:50.046368] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:03.344 [2024-11-15 10:58:50.046374] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:03.344 [2024-11-15 10:58:50.046378] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:03.344 [2024-11-15 10:58:50.046382] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x4e8750): datao=0, datal=4096, cccid=4 00:15:03.344 [2024-11-15 10:58:50.046387] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x54cd40) on tqpair(0x4e8750): expected_datao=0, payload_size=4096 00:15:03.344 [2024-11-15 10:58:50.046391] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:03.344 [2024-11-15 10:58:50.046399] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:03.344 [2024-11-15 10:58:50.046403] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:03.344 [2024-11-15 10:58:50.046411] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:03.344 [2024-11-15 10:58:50.046417] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:03.344 [2024-11-15 10:58:50.046421] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:03.344 [2024-11-15 10:58:50.046426] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x54cd40) on tqpair=0x4e8750 00:15:03.344 [2024-11-15 10:58:50.046439] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:15:03.344 [2024-11-15 10:58:50.046471] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:03.344 [2024-11-15 10:58:50.046478] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x4e8750) 00:15:03.344 [2024-11-15 10:58:50.046485] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.344 [2024-11-15 10:58:50.046493] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:03.344 [2024-11-15 10:58:50.046498] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:03.344 [2024-11-15 10:58:50.046501] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x4e8750) 00:15:03.344 [2024-11-15 10:58:50.046508] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:15:03.344 [2024-11-15 10:58:50.046532] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x54cd40, cid 4, qid 0 00:15:03.344 [2024-11-15 10:58:50.046552] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x54cec0, cid 5, qid 0 00:15:03.344 [2024-11-15 10:58:50.046657] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:03.344 [2024-11-15 10:58:50.046665] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:03.344 [2024-11-15 10:58:50.046668] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:03.344 [2024-11-15 10:58:50.046672] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x4e8750): datao=0, datal=1024, cccid=4 00:15:03.344 [2024-11-15 10:58:50.046677] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x54cd40) on tqpair(0x4e8750): expected_datao=0, payload_size=1024 00:15:03.344 [2024-11-15 10:58:50.046682] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:03.344 [2024-11-15 10:58:50.046689] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:03.344 [2024-11-15 10:58:50.046693] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:03.344 [2024-11-15 10:58:50.046699] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:03.344 [2024-11-15 10:58:50.046705] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:03.344 [2024-11-15 10:58:50.046708] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:03.344 [2024-11-15 10:58:50.046713] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x54cec0) on tqpair=0x4e8750 00:15:03.344 [2024-11-15 10:58:50.046731] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:03.344 [2024-11-15 10:58:50.046739] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:03.344 [2024-11-15 10:58:50.046759] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:03.344 [2024-11-15 10:58:50.046763] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x54cd40) on tqpair=0x4e8750 00:15:03.344 [2024-11-15 10:58:50.046776] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:03.344 [2024-11-15 10:58:50.046781] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x4e8750) 00:15:03.344 [2024-11-15 10:58:50.046789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.344 [2024-11-15 10:58:50.046815] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x54cd40, cid 4, qid 0 00:15:03.344 [2024-11-15 10:58:50.046883] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:03.344 [2024-11-15 10:58:50.046890] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:03.344 [2024-11-15 10:58:50.046894] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:03.344 [2024-11-15 10:58:50.046898] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x4e8750): datao=0, datal=3072, cccid=4 00:15:03.344 [2024-11-15 10:58:50.046903] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x54cd40) on tqpair(0x4e8750): expected_datao=0, payload_size=3072 00:15:03.344 [2024-11-15 10:58:50.046908] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:03.344 [2024-11-15 10:58:50.046915] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:03.344 [2024-11-15 10:58:50.046919] 
nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:03.344 [2024-11-15 10:58:50.046928] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:03.344 [2024-11-15 10:58:50.046934] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:03.344 [2024-11-15 10:58:50.046938] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:03.344 [2024-11-15 10:58:50.046942] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x54cd40) on tqpair=0x4e8750 00:15:03.344 [2024-11-15 10:58:50.046953] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:03.344 [2024-11-15 10:58:50.046957] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x4e8750) 00:15:03.344 [2024-11-15 10:58:50.046965] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.344 [2024-11-15 10:58:50.046989] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x54cd40, cid 4, qid 0 00:15:03.344 [2024-11-15 10:58:50.047050] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:03.344 [2024-11-15 10:58:50.047057] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:03.344 [2024-11-15 10:58:50.047060] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:03.344 [2024-11-15 10:58:50.047064] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x4e8750): datao=0, datal=8, cccid=4 00:15:03.344 [2024-11-15 10:58:50.047069] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x54cd40) on tqpair(0x4e8750): expected_datao=0, payload_size=8 00:15:03.344 [2024-11-15 10:58:50.047074] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:03.344 [2024-11-15 10:58:50.047081] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:03.344 [2024-11-15 10:58:50.047085] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:03.344 [2024-11-15 10:58:50.047100] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:03.344 [2024-11-15 10:58:50.047108] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:03.344 [2024-11-15 10:58:50.047111] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:03.344 [2024-11-15 10:58:50.047116] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x54cd40) on tqpair=0x4e8750 00:15:03.344 ===================================================== 00:15:03.344 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2014-08.org.nvmexpress.discovery 00:15:03.344 ===================================================== 00:15:03.344 Controller Capabilities/Features 00:15:03.344 ================================ 00:15:03.344 Vendor ID: 0000 00:15:03.344 Subsystem Vendor ID: 0000 00:15:03.344 Serial Number: .................... 00:15:03.344 Model Number: ........................................ 
00:15:03.344 Firmware Version: 25.01
00:15:03.344 Recommended Arb Burst: 0
00:15:03.344 IEEE OUI Identifier: 00 00 00
00:15:03.344 Multi-path I/O
00:15:03.344 May have multiple subsystem ports: No
00:15:03.344 May have multiple controllers: No
00:15:03.344 Associated with SR-IOV VF: No
00:15:03.344 Max Data Transfer Size: 131072
00:15:03.344 Max Number of Namespaces: 0
00:15:03.344 Max Number of I/O Queues: 1024
00:15:03.344 NVMe Specification Version (VS): 1.3
00:15:03.344 NVMe Specification Version (Identify): 1.3
00:15:03.344 Maximum Queue Entries: 128
00:15:03.344 Contiguous Queues Required: Yes
00:15:03.344 Arbitration Mechanisms Supported
00:15:03.344 Weighted Round Robin: Not Supported
00:15:03.344 Vendor Specific: Not Supported
00:15:03.344 Reset Timeout: 15000 ms
00:15:03.344 Doorbell Stride: 4 bytes
00:15:03.344 NVM Subsystem Reset: Not Supported
00:15:03.344 Command Sets Supported
00:15:03.344 NVM Command Set: Supported
00:15:03.344 Boot Partition: Not Supported
00:15:03.345 Memory Page Size Minimum: 4096 bytes
00:15:03.345 Memory Page Size Maximum: 4096 bytes
00:15:03.345 Persistent Memory Region: Not Supported
00:15:03.345 Optional Asynchronous Events Supported
00:15:03.345 Namespace Attribute Notices: Not Supported
00:15:03.345 Firmware Activation Notices: Not Supported
00:15:03.345 ANA Change Notices: Not Supported
00:15:03.345 PLE Aggregate Log Change Notices: Not Supported
00:15:03.345 LBA Status Info Alert Notices: Not Supported
00:15:03.345 EGE Aggregate Log Change Notices: Not Supported
00:15:03.345 Normal NVM Subsystem Shutdown event: Not Supported
00:15:03.345 Zone Descriptor Change Notices: Not Supported
00:15:03.345 Discovery Log Change Notices: Supported
00:15:03.345 Controller Attributes
00:15:03.345 128-bit Host Identifier: Not Supported
00:15:03.345 Non-Operational Permissive Mode: Not Supported
00:15:03.345 NVM Sets: Not Supported
00:15:03.345 Read Recovery Levels: Not Supported
00:15:03.345 Endurance Groups: Not Supported
00:15:03.345 Predictable Latency Mode: Not Supported
00:15:03.345 Traffic Based Keep ALive: Not Supported
00:15:03.345 Namespace Granularity: Not Supported
00:15:03.345 SQ Associations: Not Supported
00:15:03.345 UUID List: Not Supported
00:15:03.345 Multi-Domain Subsystem: Not Supported
00:15:03.345 Fixed Capacity Management: Not Supported
00:15:03.345 Variable Capacity Management: Not Supported
00:15:03.345 Delete Endurance Group: Not Supported
00:15:03.345 Delete NVM Set: Not Supported
00:15:03.345 Extended LBA Formats Supported: Not Supported
00:15:03.345 Flexible Data Placement Supported: Not Supported
00:15:03.345
00:15:03.345 Controller Memory Buffer Support
00:15:03.345 ================================
00:15:03.345 Supported: No
00:15:03.345
00:15:03.345 Persistent Memory Region Support
00:15:03.345 ================================
00:15:03.345 Supported: No
00:15:03.345
00:15:03.345 Admin Command Set Attributes
00:15:03.345 ============================
00:15:03.345 Security Send/Receive: Not Supported
00:15:03.345 Format NVM: Not Supported
00:15:03.345 Firmware Activate/Download: Not Supported
00:15:03.345 Namespace Management: Not Supported
00:15:03.345 Device Self-Test: Not Supported
00:15:03.345 Directives: Not Supported
00:15:03.345 NVMe-MI: Not Supported
00:15:03.345 Virtualization Management: Not Supported
00:15:03.345 Doorbell Buffer Config: Not Supported
00:15:03.345 Get LBA Status Capability: Not Supported
00:15:03.345 Command & Feature Lockdown Capability: Not Supported
00:15:03.345 Abort Command Limit: 1
00:15:03.345 Async Event Request Limit: 4
00:15:03.345 Number of Firmware Slots: N/A
00:15:03.345 Firmware Slot 1 Read-Only: N/A
00:15:03.345 Firmware Activation Without Reset: N/A
00:15:03.345 Multiple Update Detection Support: N/A
00:15:03.345 Firmware Update Granularity: No Information Provided
00:15:03.345 Per-Namespace SMART Log: No
00:15:03.345 Asymmetric Namespace Access Log Page: Not Supported
00:15:03.345 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:15:03.345 Command Effects Log Page: Not Supported
00:15:03.345 Get Log Page Extended Data: Supported
00:15:03.345 Telemetry Log Pages: Not Supported
00:15:03.345 Persistent Event Log Pages: Not Supported
00:15:03.345 Supported Log Pages Log Page: May Support
00:15:03.345 Commands Supported & Effects Log Page: Not Supported
00:15:03.345 Feature Identifiers & Effects Log Page:May Support
00:15:03.345 NVMe-MI Commands & Effects Log Page: May Support
00:15:03.345 Data Area 4 for Telemetry Log: Not Supported
00:15:03.345 Error Log Page Entries Supported: 128
00:15:03.345 Keep Alive: Not Supported
00:15:03.345
00:15:03.345 NVM Command Set Attributes
00:15:03.345 ==========================
00:15:03.345 Submission Queue Entry Size
00:15:03.345 Max: 1
00:15:03.345 Min: 1
00:15:03.345 Completion Queue Entry Size
00:15:03.345 Max: 1
00:15:03.345 Min: 1
00:15:03.345 Number of Namespaces: 0
00:15:03.345 Compare Command: Not Supported
00:15:03.345 Write Uncorrectable Command: Not Supported
00:15:03.345 Dataset Management Command: Not Supported
00:15:03.345 Write Zeroes Command: Not Supported
00:15:03.345 Set Features Save Field: Not Supported
00:15:03.345 Reservations: Not Supported
00:15:03.345 Timestamp: Not Supported
00:15:03.345 Copy: Not Supported
00:15:03.345 Volatile Write Cache: Not Present
00:15:03.345 Atomic Write Unit (Normal): 1
00:15:03.345 Atomic Write Unit (PFail): 1
00:15:03.345 Atomic Compare & Write Unit: 1
00:15:03.345 Fused Compare & Write: Supported
00:15:03.345 Scatter-Gather List
00:15:03.345 SGL Command Set: Supported
00:15:03.345 SGL Keyed: Supported
00:15:03.345 SGL Bit Bucket Descriptor: Not Supported
00:15:03.345 SGL Metadata Pointer: Not Supported
00:15:03.345 Oversized SGL: Not Supported
00:15:03.345 SGL Metadata Address: Not Supported
00:15:03.345 SGL Offset: Supported
00:15:03.345 Transport SGL Data Block: Not Supported
00:15:03.345 Replay Protected Memory Block: Not Supported
00:15:03.345
00:15:03.345 Firmware Slot Information
00:15:03.345 =========================
00:15:03.345 Active slot: 0
00:15:03.345
00:15:03.345
00:15:03.345 Error Log
00:15:03.345 =========
00:15:03.345
00:15:03.345 Active Namespaces
00:15:03.345 =================
00:15:03.345 Discovery Log Page
00:15:03.345 ==================
00:15:03.345 Generation Counter: 2
00:15:03.345 Number of Records: 2
00:15:03.345 Record Format: 0
00:15:03.345
00:15:03.345 Discovery Log Entry 0
00:15:03.345 ----------------------
00:15:03.345 Transport Type: 3 (TCP)
00:15:03.345 Address Family: 1 (IPv4)
00:15:03.345 Subsystem Type: 3 (Current Discovery Subsystem)
00:15:03.345 Entry Flags:
00:15:03.345 Duplicate Returned Information: 1
00:15:03.345 Explicit Persistent Connection Support for Discovery: 1
00:15:03.345 Transport Requirements:
00:15:03.345 Secure Channel: Not Required
00:15:03.345 Port ID: 0 (0x0000)
00:15:03.345 Controller ID: 65535 (0xffff)
00:15:03.345 Admin Max SQ Size: 128
00:15:03.345 Transport Service Identifier: 4420
00:15:03.345 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:15:03.345 Transport Address: 10.0.0.3 00:15:03.345
Discovery Log Entry 1 00:15:03.345 ---------------------- 00:15:03.345 Transport Type: 3 (TCP) 00:15:03.345 Address Family: 1 (IPv4) 00:15:03.345 Subsystem Type: 2 (NVM Subsystem) 00:15:03.345 Entry Flags: 00:15:03.345 Duplicate Returned Information: 0 00:15:03.345 Explicit Persistent Connection Support for Discovery: 0 00:15:03.345 Transport Requirements: 00:15:03.345 Secure Channel: Not Required 00:15:03.345 Port ID: 0 (0x0000) 00:15:03.345 Controller ID: 65535 (0xffff) 00:15:03.345 Admin Max SQ Size: 128 00:15:03.345 Transport Service Identifier: 4420 00:15:03.345 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:15:03.345 Transport Address: 10.0.0.3 [2024-11-15 10:58:50.047209] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:15:03.345 [2024-11-15 10:58:50.047223] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x54c740) on tqpair=0x4e8750 00:15:03.345 [2024-11-15 10:58:50.047231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:03.345 [2024-11-15 10:58:50.047237] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x54c8c0) on tqpair=0x4e8750 00:15:03.345 [2024-11-15 10:58:50.047242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:03.345 [2024-11-15 10:58:50.047247] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x54ca40) on tqpair=0x4e8750 00:15:03.345 [2024-11-15 10:58:50.047252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:03.345 [2024-11-15 10:58:50.047257] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x54cbc0) on tqpair=0x4e8750 00:15:03.345 [2024-11-15 10:58:50.047262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:03.345 [2024-11-15 10:58:50.047272] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:03.345 [2024-11-15 10:58:50.047276] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:03.345 [2024-11-15 10:58:50.047280] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4e8750) 00:15:03.346 [2024-11-15 10:58:50.047288] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.346 [2024-11-15 10:58:50.047311] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x54cbc0, cid 3, qid 0 00:15:03.346 [2024-11-15 10:58:50.047361] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:03.346 [2024-11-15 10:58:50.047368] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:03.346 [2024-11-15 10:58:50.047372] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:03.346 [2024-11-15 10:58:50.047376] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x54cbc0) on tqpair=0x4e8750 00:15:03.346 [2024-11-15 10:58:50.047385] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:03.346 [2024-11-15 10:58:50.047390] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:03.346 [2024-11-15 10:58:50.047394] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4e8750) 00:15:03.346 [2024-11-15 10:58:50.047401] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.346 [2024-11-15 10:58:50.047423] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x54cbc0, cid 3, qid 0 00:15:03.346 [2024-11-15 10:58:50.047490] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:03.346 [2024-11-15 10:58:50.047497] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:03.346 [2024-11-15 10:58:50.047500] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:03.346 [2024-11-15 10:58:50.047505] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x54cbc0) on tqpair=0x4e8750 00:15:03.346 [2024-11-15 10:58:50.047510] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:15:03.346 [2024-11-15 10:58:50.047516] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:15:03.346 [2024-11-15 10:58:50.047526] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:03.346 [2024-11-15 10:58:50.047531] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:03.346 [2024-11-15 10:58:50.047535] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4e8750) 00:15:03.346 [2024-11-15 10:58:50.047555] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.346 [2024-11-15 10:58:50.047577] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x54cbc0, cid 3, qid 0 00:15:03.346 [2024-11-15 10:58:50.047627] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:03.346 [2024-11-15 10:58:50.047634] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:03.346 [2024-11-15 10:58:50.047638] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:03.346 [2024-11-15 10:58:50.047642] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x54cbc0) on tqpair=0x4e8750 00:15:03.346 [2024-11-15 10:58:50.047654] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:03.346 [2024-11-15 10:58:50.047659] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:03.346 [2024-11-15 10:58:50.047663] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4e8750) 00:15:03.346 [2024-11-15 10:58:50.047686] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.346 [2024-11-15 10:58:50.047704] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x54cbc0, cid 3, qid 0 00:15:03.346 [2024-11-15 10:58:50.047751] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:03.346 [2024-11-15 10:58:50.047785] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:03.346 [2024-11-15 10:58:50.047790] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:03.346 [2024-11-15 10:58:50.047795] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x54cbc0) on tqpair=0x4e8750 00:15:03.346 [2024-11-15 10:58:50.047806] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:03.346 [2024-11-15 10:58:50.047811] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:03.346 [2024-11-15 10:58:50.047815] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4e8750) 00:15:03.346 [2024-11-15 10:58:50.047823] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.346 [2024-11-15 10:58:50.047842] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x54cbc0, cid 3, qid 0 00:15:03.346 [2024-11-15 10:58:50.047900] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:03.346 [2024-11-15 10:58:50.047907] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:03.346 [2024-11-15 10:58:50.047911] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:03.346 [2024-11-15 10:58:50.047915] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x54cbc0) on tqpair=0x4e8750 00:15:03.346 [2024-11-15 10:58:50.047926] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:03.346 [2024-11-15 10:58:50.047931] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:03.346 [2024-11-15 10:58:50.047935] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4e8750) 00:15:03.346 [2024-11-15 10:58:50.047942] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.346 [2024-11-15 10:58:50.047960] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x54cbc0, cid 3, qid 0 00:15:03.346 [2024-11-15 10:58:50.048003] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:03.346 [2024-11-15 10:58:50.048010] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:03.346 [2024-11-15 10:58:50.048013] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:03.346 [2024-11-15 10:58:50.048018] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x54cbc0) on tqpair=0x4e8750 00:15:03.346 [2024-11-15 10:58:50.048028] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:03.346 [2024-11-15 10:58:50.048033] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:03.346 [2024-11-15 10:58:50.048037] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4e8750) 00:15:03.346 [2024-11-15 10:58:50.048045] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.346 [2024-11-15 10:58:50.048062] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x54cbc0, cid 3, qid 0 00:15:03.346 [2024-11-15 10:58:50.048122] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:03.346 [2024-11-15 10:58:50.048128] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:03.346 [2024-11-15 10:58:50.048132] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:03.346 [2024-11-15 10:58:50.048136] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x54cbc0) on tqpair=0x4e8750 00:15:03.346 [2024-11-15 10:58:50.048146] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:03.346 [2024-11-15 10:58:50.048151] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:03.346 [2024-11-15 10:58:50.048155] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4e8750) 00:15:03.346 [2024-11-15 10:58:50.048162] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.346 [2024-11-15 10:58:50.048178] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x54cbc0, cid 3, qid 0 00:15:03.346 [2024-11-15 10:58:50.048230] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:03.346 [2024-11-15 10:58:50.048237] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:03.346 [2024-11-15 10:58:50.048240] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:03.346 [2024-11-15 10:58:50.048245] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x54cbc0) on tqpair=0x4e8750 00:15:03.346 [2024-11-15 10:58:50.048255] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:03.346 [2024-11-15 10:58:50.048260] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:03.346 [2024-11-15 10:58:50.048264] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4e8750) 00:15:03.346 [2024-11-15 10:58:50.048271] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.346 [2024-11-15 10:58:50.048287] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x54cbc0, cid 3, qid 0 00:15:03.346 [2024-11-15 10:58:50.048352] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:03.346 [2024-11-15 10:58:50.048359] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:03.346 [2024-11-15 10:58:50.048362] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:03.346 [2024-11-15 10:58:50.048367] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x54cbc0) on tqpair=0x4e8750 00:15:03.346 [2024-11-15 10:58:50.048378] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:03.346 [2024-11-15 10:58:50.048383] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:03.346 [2024-11-15 10:58:50.048387] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4e8750) 00:15:03.346 [2024-11-15 10:58:50.048395] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.346 [2024-11-15 10:58:50.048412] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x54cbc0, cid 3, qid 0 00:15:03.346 [2024-11-15 10:58:50.048457] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:03.346 [2024-11-15 10:58:50.048464] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:03.346 [2024-11-15 10:58:50.048467] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:03.346 [2024-11-15 10:58:50.048472] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x54cbc0) on tqpair=0x4e8750 00:15:03.346 [2024-11-15 10:58:50.048482] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:03.346 [2024-11-15 10:58:50.048488] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:03.346 [2024-11-15 10:58:50.048491] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4e8750) 00:15:03.346 [2024-11-15 10:58:50.048499] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.346 [2024-11-15 10:58:50.048516] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x54cbc0, cid 3, qid 0 00:15:03.346 [2024-11-15 10:58:50.052644] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:03.346 [2024-11-15 10:58:50.052666] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:03.346 [2024-11-15 10:58:50.052687] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:03.346 [2024-11-15 10:58:50.052692] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x54cbc0) on tqpair=0x4e8750 00:15:03.346 [2024-11-15 10:58:50.052707] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:03.347 [2024-11-15 10:58:50.052712] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:03.347 [2024-11-15 10:58:50.052716] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4e8750) 00:15:03.347 [2024-11-15 10:58:50.052725] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.347 [2024-11-15 10:58:50.052749] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x54cbc0, cid 3, qid 0 00:15:03.347 [2024-11-15 10:58:50.052801] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:03.347 [2024-11-15 10:58:50.052808] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:03.347 [2024-11-15 10:58:50.052811] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:03.347 [2024-11-15 10:58:50.052816] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x54cbc0) on tqpair=0x4e8750 00:15:03.347 [2024-11-15 10:58:50.052824] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 5 milliseconds 00:15:03.347 00:15:03.347 10:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:15:03.347 [2024-11-15 10:58:50.096932] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
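Editor's note: the spdk_nvme_identify invocation above drives the same public SPDK NVMe API that any application would use for this flow: parse the transport ID string given to -r, connect to the controller over TCP (which runs the admin-queue init state machine traced in the debug output: FABRIC CONNECT, read VS/CAP, enable, IDENTIFY, AER setup, keep-alive), then read the identify data. The following is a minimal sketch of that flow under the assumption that the standard SPDK headers are available; the file/program name is illustrative and error handling is trimmed, so it is not part of the test suite and not the identify tool's actual source.

/* identify_sketch.c - hedged sketch of the connect + identify flow logged above */
#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
	struct spdk_env_opts opts;
	struct spdk_nvme_transport_id trid = {};
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	spdk_env_opts_init(&opts);
	opts.name = "identify_sketch";   /* illustrative process name, not from the log */
	if (spdk_env_init(&opts) < 0) {
		return 1;
	}

	/* Same transport ID string that the test passes to spdk_nvme_identify -r */
	if (spdk_nvme_transport_id_parse(&trid,
	    "trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 "
	    "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return 1;
	}

	/* Synchronous connect: performs the admin-queue bring-up seen in the
	 * nvme_ctrlr.c / nvme_tcp.c debug traces above. */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		return 1;
	}

	/* Identify Controller data, as printed in the report above. */
	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("Serial Number: %.20s\n", cdata->sn);
	printf("Model Number:  %.40s\n", cdata->mn);
	printf("MDTS (log2 units): %u\n", cdata->mdts);

	spdk_nvme_detach(ctrlr);
	return 0;
}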
00:15:03.347 [2024-11-15 10:58:50.096992] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73933 ] 00:15:03.612 [2024-11-15 10:58:50.252808] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:15:03.612 [2024-11-15 10:58:50.252878] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:15:03.612 [2024-11-15 10:58:50.252885] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:15:03.612 [2024-11-15 10:58:50.252896] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:15:03.612 [2024-11-15 10:58:50.252904] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:15:03.612 [2024-11-15 10:58:50.253219] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:15:03.612 [2024-11-15 10:58:50.253287] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x2420750 0 00:15:03.612 [2024-11-15 10:58:50.260609] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:15:03.612 [2024-11-15 10:58:50.260633] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:15:03.612 [2024-11-15 10:58:50.260656] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:15:03.612 [2024-11-15 10:58:50.260660] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:15:03.612 [2024-11-15 10:58:50.260689] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:03.612 [2024-11-15 10:58:50.260696] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:03.612 [2024-11-15 10:58:50.260701] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2420750) 00:15:03.612 [2024-11-15 10:58:50.260712] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:15:03.612 [2024-11-15 10:58:50.260744] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2484740, cid 0, qid 0 00:15:03.612 [2024-11-15 10:58:50.268606] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:03.612 [2024-11-15 10:58:50.268627] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:03.612 [2024-11-15 10:58:50.268648] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:03.612 [2024-11-15 10:58:50.268653] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2484740) on tqpair=0x2420750 00:15:03.612 [2024-11-15 10:58:50.268664] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:15:03.612 [2024-11-15 10:58:50.268672] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:15:03.612 [2024-11-15 10:58:50.268678] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:15:03.612 [2024-11-15 10:58:50.268693] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:03.612 [2024-11-15 10:58:50.268699] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:03.612 [2024-11-15 10:58:50.268703] 
nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2420750) 00:15:03.612 [2024-11-15 10:58:50.268712] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.612 [2024-11-15 10:58:50.268739] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2484740, cid 0, qid 0 00:15:03.612 [2024-11-15 10:58:50.268802] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:03.612 [2024-11-15 10:58:50.268809] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:03.612 [2024-11-15 10:58:50.268813] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:03.612 [2024-11-15 10:58:50.268816] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2484740) on tqpair=0x2420750 00:15:03.612 [2024-11-15 10:58:50.268822] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:15:03.612 [2024-11-15 10:58:50.268830] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:15:03.612 [2024-11-15 10:58:50.268837] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:03.612 [2024-11-15 10:58:50.268842] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:03.612 [2024-11-15 10:58:50.268845] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2420750) 00:15:03.612 [2024-11-15 10:58:50.268853] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.612 [2024-11-15 10:58:50.268872] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2484740, cid 0, qid 0 00:15:03.612 [2024-11-15 10:58:50.268940] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:03.612 [2024-11-15 10:58:50.268947] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:03.612 [2024-11-15 10:58:50.268950] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:03.612 [2024-11-15 10:58:50.268954] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2484740) on tqpair=0x2420750 00:15:03.612 [2024-11-15 10:58:50.268960] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:15:03.612 [2024-11-15 10:58:50.268969] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:15:03.612 [2024-11-15 10:58:50.268977] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:03.612 [2024-11-15 10:58:50.268981] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:03.612 [2024-11-15 10:58:50.268990] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2420750) 00:15:03.612 [2024-11-15 10:58:50.268997] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.612 [2024-11-15 10:58:50.269016] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2484740, cid 0, qid 0 00:15:03.612 [2024-11-15 10:58:50.269066] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:03.612 [2024-11-15 10:58:50.269074] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:03.612 
[2024-11-15 10:58:50.269077] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:03.612 [2024-11-15 10:58:50.269081] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2484740) on tqpair=0x2420750 00:15:03.612 [2024-11-15 10:58:50.269087] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:03.612 [2024-11-15 10:58:50.269097] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:03.612 [2024-11-15 10:58:50.269102] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:03.612 [2024-11-15 10:58:50.269106] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2420750) 00:15:03.612 [2024-11-15 10:58:50.269113] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.612 [2024-11-15 10:58:50.269132] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2484740, cid 0, qid 0 00:15:03.612 [2024-11-15 10:58:50.269189] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:03.612 [2024-11-15 10:58:50.269196] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:03.612 [2024-11-15 10:58:50.269200] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:03.612 [2024-11-15 10:58:50.269203] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2484740) on tqpair=0x2420750 00:15:03.612 [2024-11-15 10:58:50.269209] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:15:03.612 [2024-11-15 10:58:50.269214] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:15:03.612 [2024-11-15 10:58:50.269222] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:03.612 [2024-11-15 10:58:50.269333] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:15:03.613 [2024-11-15 10:58:50.269339] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:03.613 [2024-11-15 10:58:50.269349] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:03.613 [2024-11-15 10:58:50.269353] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:03.613 [2024-11-15 10:58:50.269357] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2420750) 00:15:03.613 [2024-11-15 10:58:50.269365] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.613 [2024-11-15 10:58:50.269386] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2484740, cid 0, qid 0 00:15:03.613 [2024-11-15 10:58:50.269432] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:03.613 [2024-11-15 10:58:50.269440] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:03.613 [2024-11-15 10:58:50.269444] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:03.613 [2024-11-15 10:58:50.269448] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2484740) on tqpair=0x2420750 
00:15:03.613 [2024-11-15 10:58:50.269454] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:03.613 [2024-11-15 10:58:50.269465] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:03.613 [2024-11-15 10:58:50.269470] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:03.613 [2024-11-15 10:58:50.269474] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2420750) 00:15:03.613 [2024-11-15 10:58:50.269481] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.613 [2024-11-15 10:58:50.269500] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2484740, cid 0, qid 0 00:15:03.613 [2024-11-15 10:58:50.269570] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:03.613 [2024-11-15 10:58:50.269578] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:03.613 [2024-11-15 10:58:50.269581] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:03.613 [2024-11-15 10:58:50.269585] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2484740) on tqpair=0x2420750 00:15:03.613 [2024-11-15 10:58:50.269605] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:03.613 [2024-11-15 10:58:50.269610] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:15:03.613 [2024-11-15 10:58:50.269620] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:15:03.613 [2024-11-15 10:58:50.269636] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:15:03.613 [2024-11-15 10:58:50.269647] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:03.613 [2024-11-15 10:58:50.269652] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2420750) 00:15:03.613 [2024-11-15 10:58:50.269675] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.613 [2024-11-15 10:58:50.269697] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2484740, cid 0, qid 0 00:15:03.613 [2024-11-15 10:58:50.269805] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:03.613 [2024-11-15 10:58:50.269813] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:03.613 [2024-11-15 10:58:50.269817] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:03.613 [2024-11-15 10:58:50.269821] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2420750): datao=0, datal=4096, cccid=0 00:15:03.613 [2024-11-15 10:58:50.269825] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2484740) on tqpair(0x2420750): expected_datao=0, payload_size=4096 00:15:03.613 [2024-11-15 10:58:50.269830] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:03.613 [2024-11-15 10:58:50.269838] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:03.613 [2024-11-15 10:58:50.269843] 
nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:03.613 [2024-11-15 10:58:50.269852] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:03.613 [2024-11-15 10:58:50.269858] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:03.613 [2024-11-15 10:58:50.269861] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:03.613 [2024-11-15 10:58:50.269865] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2484740) on tqpair=0x2420750 00:15:03.613 [2024-11-15 10:58:50.269874] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:15:03.613 [2024-11-15 10:58:50.269880] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:15:03.613 [2024-11-15 10:58:50.269884] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:15:03.613 [2024-11-15 10:58:50.269890] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:15:03.613 [2024-11-15 10:58:50.269895] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:15:03.613 [2024-11-15 10:58:50.269900] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:15:03.613 [2024-11-15 10:58:50.269914] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:15:03.613 [2024-11-15 10:58:50.269923] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:03.613 [2024-11-15 10:58:50.269927] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:03.613 [2024-11-15 10:58:50.269931] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2420750) 00:15:03.613 [2024-11-15 10:58:50.269939] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:03.613 [2024-11-15 10:58:50.269960] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2484740, cid 0, qid 0 00:15:03.613 [2024-11-15 10:58:50.270017] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:03.613 [2024-11-15 10:58:50.270024] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:03.613 [2024-11-15 10:58:50.270028] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:03.613 [2024-11-15 10:58:50.270032] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2484740) on tqpair=0x2420750 00:15:03.613 [2024-11-15 10:58:50.270040] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:03.613 [2024-11-15 10:58:50.270044] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:03.613 [2024-11-15 10:58:50.270048] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2420750) 00:15:03.613 [2024-11-15 10:58:50.270054] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:03.613 [2024-11-15 10:58:50.270061] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:03.613 [2024-11-15 10:58:50.270065] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:03.613 [2024-11-15 
10:58:50.270069] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x2420750) 00:15:03.613 [2024-11-15 10:58:50.270075] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:03.613 [2024-11-15 10:58:50.270081] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:03.613 [2024-11-15 10:58:50.270085] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:03.613 [2024-11-15 10:58:50.270089] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x2420750) 00:15:03.613 [2024-11-15 10:58:50.270095] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:03.613 [2024-11-15 10:58:50.270101] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:03.613 [2024-11-15 10:58:50.270105] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:03.613 [2024-11-15 10:58:50.270109] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2420750) 00:15:03.613 [2024-11-15 10:58:50.270115] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:03.613 [2024-11-15 10:58:50.270120] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:15:03.613 [2024-11-15 10:58:50.270133] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:03.613 [2024-11-15 10:58:50.270141] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:03.613 [2024-11-15 10:58:50.270145] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2420750) 00:15:03.613 [2024-11-15 10:58:50.270152] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.613 [2024-11-15 10:58:50.270174] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2484740, cid 0, qid 0 00:15:03.613 [2024-11-15 10:58:50.270181] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24848c0, cid 1, qid 0 00:15:03.613 [2024-11-15 10:58:50.270186] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2484a40, cid 2, qid 0 00:15:03.614 [2024-11-15 10:58:50.270191] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2484bc0, cid 3, qid 0 00:15:03.614 [2024-11-15 10:58:50.270195] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2484d40, cid 4, qid 0 00:15:03.614 [2024-11-15 10:58:50.270275] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:03.614 [2024-11-15 10:58:50.270282] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:03.614 [2024-11-15 10:58:50.270286] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:03.614 [2024-11-15 10:58:50.270290] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2484d40) on tqpair=0x2420750 00:15:03.614 [2024-11-15 10:58:50.270296] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:15:03.614 [2024-11-15 10:58:50.270302] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:03.614 [2024-11-15 10:58:50.270311] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:15:03.614 [2024-11-15 10:58:50.270322] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:15:03.614 [2024-11-15 10:58:50.270329] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:03.614 [2024-11-15 10:58:50.270333] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:03.614 [2024-11-15 10:58:50.270337] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2420750) 00:15:03.614 [2024-11-15 10:58:50.270345] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:03.614 [2024-11-15 10:58:50.270365] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2484d40, cid 4, qid 0 00:15:03.614 [2024-11-15 10:58:50.270421] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:03.614 [2024-11-15 10:58:50.270428] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:03.614 [2024-11-15 10:58:50.270432] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:03.614 [2024-11-15 10:58:50.270436] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2484d40) on tqpair=0x2420750 00:15:03.614 [2024-11-15 10:58:50.270498] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:15:03.614 [2024-11-15 10:58:50.270511] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:15:03.614 [2024-11-15 10:58:50.270520] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:03.614 [2024-11-15 10:58:50.270524] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2420750) 00:15:03.614 [2024-11-15 10:58:50.270531] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.614 [2024-11-15 10:58:50.270563] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2484d40, cid 4, qid 0 00:15:03.614 [2024-11-15 10:58:50.270634] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:03.614 [2024-11-15 10:58:50.270641] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:03.614 [2024-11-15 10:58:50.270645] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:03.614 [2024-11-15 10:58:50.270649] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2420750): datao=0, datal=4096, cccid=4 00:15:03.614 [2024-11-15 10:58:50.270654] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2484d40) on tqpair(0x2420750): expected_datao=0, payload_size=4096 00:15:03.614 [2024-11-15 10:58:50.270659] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:03.614 [2024-11-15 10:58:50.270666] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:03.614 [2024-11-15 10:58:50.270670] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:03.614 [2024-11-15 
10:58:50.270679] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:03.614 [2024-11-15 10:58:50.270686] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:03.614 [2024-11-15 10:58:50.270689] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:03.614 [2024-11-15 10:58:50.270693] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2484d40) on tqpair=0x2420750 00:15:03.614 [2024-11-15 10:58:50.270709] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:15:03.614 [2024-11-15 10:58:50.270720] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:15:03.614 [2024-11-15 10:58:50.270731] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:15:03.614 [2024-11-15 10:58:50.270739] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:03.614 [2024-11-15 10:58:50.270743] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2420750) 00:15:03.614 [2024-11-15 10:58:50.270751] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.614 [2024-11-15 10:58:50.270773] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2484d40, cid 4, qid 0 00:15:03.614 [2024-11-15 10:58:50.270879] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:03.614 [2024-11-15 10:58:50.270886] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:03.614 [2024-11-15 10:58:50.270890] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:03.614 [2024-11-15 10:58:50.270894] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2420750): datao=0, datal=4096, cccid=4 00:15:03.614 [2024-11-15 10:58:50.270898] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2484d40) on tqpair(0x2420750): expected_datao=0, payload_size=4096 00:15:03.614 [2024-11-15 10:58:50.270903] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:03.614 [2024-11-15 10:58:50.270910] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:03.614 [2024-11-15 10:58:50.270914] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:03.614 [2024-11-15 10:58:50.270923] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:03.614 [2024-11-15 10:58:50.270929] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:03.614 [2024-11-15 10:58:50.270932] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:03.614 [2024-11-15 10:58:50.270936] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2484d40) on tqpair=0x2420750 00:15:03.614 [2024-11-15 10:58:50.270955] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:03.614 [2024-11-15 10:58:50.270967] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:03.614 [2024-11-15 10:58:50.270976] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:03.614 [2024-11-15 10:58:50.270980] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=4 on tqpair(0x2420750) 00:15:03.614 [2024-11-15 10:58:50.270987] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.614 [2024-11-15 10:58:50.271008] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2484d40, cid 4, qid 0 00:15:03.614 [2024-11-15 10:58:50.271067] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:03.614 [2024-11-15 10:58:50.271074] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:03.614 [2024-11-15 10:58:50.271078] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:03.614 [2024-11-15 10:58:50.271082] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2420750): datao=0, datal=4096, cccid=4 00:15:03.614 [2024-11-15 10:58:50.271086] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2484d40) on tqpair(0x2420750): expected_datao=0, payload_size=4096 00:15:03.614 [2024-11-15 10:58:50.271090] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:03.614 [2024-11-15 10:58:50.271097] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:03.614 [2024-11-15 10:58:50.271102] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:03.614 [2024-11-15 10:58:50.271110] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:03.614 [2024-11-15 10:58:50.271116] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:03.614 [2024-11-15 10:58:50.271120] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:03.614 [2024-11-15 10:58:50.271124] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2484d40) on tqpair=0x2420750 00:15:03.614 [2024-11-15 10:58:50.271133] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:03.614 [2024-11-15 10:58:50.271142] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:15:03.614 [2024-11-15 10:58:50.271153] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:15:03.614 [2024-11-15 10:58:50.271160] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:15:03.614 [2024-11-15 10:58:50.271165] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:03.614 [2024-11-15 10:58:50.271171] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:15:03.614 [2024-11-15 10:58:50.271176] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:15:03.614 [2024-11-15 10:58:50.271181] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:15:03.614 [2024-11-15 10:58:50.271187] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:15:03.614 [2024-11-15 10:58:50.271202] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:03.615 
[2024-11-15 10:58:50.271207] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2420750) 00:15:03.615 [2024-11-15 10:58:50.271215] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.615 [2024-11-15 10:58:50.271222] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:03.615 [2024-11-15 10:58:50.271227] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:03.615 [2024-11-15 10:58:50.271230] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2420750) 00:15:03.615 [2024-11-15 10:58:50.271236] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:15:03.615 [2024-11-15 10:58:50.271262] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2484d40, cid 4, qid 0 00:15:03.615 [2024-11-15 10:58:50.271270] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2484ec0, cid 5, qid 0 00:15:03.615 [2024-11-15 10:58:50.271341] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:03.615 [2024-11-15 10:58:50.271348] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:03.615 [2024-11-15 10:58:50.271351] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:03.615 [2024-11-15 10:58:50.271355] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2484d40) on tqpair=0x2420750 00:15:03.615 [2024-11-15 10:58:50.271362] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:03.615 [2024-11-15 10:58:50.271368] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:03.615 [2024-11-15 10:58:50.271371] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:03.615 [2024-11-15 10:58:50.271375] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2484ec0) on tqpair=0x2420750 00:15:03.615 [2024-11-15 10:58:50.271385] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:03.615 [2024-11-15 10:58:50.271390] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2420750) 00:15:03.615 [2024-11-15 10:58:50.271397] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.615 [2024-11-15 10:58:50.271416] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2484ec0, cid 5, qid 0 00:15:03.615 [2024-11-15 10:58:50.271472] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:03.615 [2024-11-15 10:58:50.271479] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:03.615 [2024-11-15 10:58:50.271483] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:03.615 [2024-11-15 10:58:50.271487] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2484ec0) on tqpair=0x2420750 00:15:03.615 [2024-11-15 10:58:50.271498] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:03.615 [2024-11-15 10:58:50.271502] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2420750) 00:15:03.615 [2024-11-15 10:58:50.271509] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.615 [2024-11-15 10:58:50.271557] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2484ec0, cid 5, qid 0 00:15:03.615 [2024-11-15 10:58:50.271608] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:03.615 [2024-11-15 10:58:50.271616] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:03.615 [2024-11-15 10:58:50.271619] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:03.615 [2024-11-15 10:58:50.271623] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2484ec0) on tqpair=0x2420750 00:15:03.615 [2024-11-15 10:58:50.271635] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:03.615 [2024-11-15 10:58:50.271640] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2420750) 00:15:03.615 [2024-11-15 10:58:50.271647] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.615 [2024-11-15 10:58:50.271667] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2484ec0, cid 5, qid 0 00:15:03.615 [2024-11-15 10:58:50.271717] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:03.615 [2024-11-15 10:58:50.271725] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:03.615 [2024-11-15 10:58:50.271728] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:03.615 [2024-11-15 10:58:50.271732] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2484ec0) on tqpair=0x2420750 00:15:03.615 [2024-11-15 10:58:50.271752] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:03.615 [2024-11-15 10:58:50.271786] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2420750) 00:15:03.615 [2024-11-15 10:58:50.271795] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.615 [2024-11-15 10:58:50.271804] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:03.615 [2024-11-15 10:58:50.271809] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2420750) 00:15:03.615 [2024-11-15 10:58:50.271816] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.615 [2024-11-15 10:58:50.271824] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:03.615 [2024-11-15 10:58:50.271828] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x2420750) 00:15:03.615 [2024-11-15 10:58:50.271835] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.615 [2024-11-15 10:58:50.271843] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:03.615 [2024-11-15 10:58:50.271848] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x2420750) 00:15:03.615 [2024-11-15 10:58:50.271854] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.615 [2024-11-15 10:58:50.271878] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2484ec0, cid 5, qid 0 00:15:03.615 
[2024-11-15 10:58:50.271885] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2484d40, cid 4, qid 0 00:15:03.615 [2024-11-15 10:58:50.271890] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2485040, cid 6, qid 0 00:15:03.615 [2024-11-15 10:58:50.271895] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24851c0, cid 7, qid 0 00:15:03.615 [2024-11-15 10:58:50.272024] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:03.615 [2024-11-15 10:58:50.272032] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:03.615 [2024-11-15 10:58:50.272036] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:03.615 [2024-11-15 10:58:50.272040] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2420750): datao=0, datal=8192, cccid=5 00:15:03.615 [2024-11-15 10:58:50.272045] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2484ec0) on tqpair(0x2420750): expected_datao=0, payload_size=8192 00:15:03.615 [2024-11-15 10:58:50.272050] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:03.615 [2024-11-15 10:58:50.272068] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:03.615 [2024-11-15 10:58:50.272073] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:03.615 [2024-11-15 10:58:50.272094] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:03.615 [2024-11-15 10:58:50.272100] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:03.615 [2024-11-15 10:58:50.272119] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:03.615 [2024-11-15 10:58:50.272122] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2420750): datao=0, datal=512, cccid=4 00:15:03.615 [2024-11-15 10:58:50.272127] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2484d40) on tqpair(0x2420750): expected_datao=0, payload_size=512 00:15:03.615 [2024-11-15 10:58:50.272131] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:03.615 [2024-11-15 10:58:50.272138] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:03.615 [2024-11-15 10:58:50.272142] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:03.615 [2024-11-15 10:58:50.272148] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:03.615 [2024-11-15 10:58:50.272153] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:03.615 [2024-11-15 10:58:50.272157] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:03.615 [2024-11-15 10:58:50.272160] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2420750): datao=0, datal=512, cccid=6 00:15:03.615 [2024-11-15 10:58:50.272164] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2485040) on tqpair(0x2420750): expected_datao=0, payload_size=512 00:15:03.615 [2024-11-15 10:58:50.272169] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:03.615 [2024-11-15 10:58:50.272175] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:03.615 [2024-11-15 10:58:50.272179] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:03.615 [2024-11-15 10:58:50.272184] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:03.615 [2024-11-15 10:58:50.272190] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:03.615 [2024-11-15 10:58:50.272193] 
nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:03.615 [2024-11-15 10:58:50.272197] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2420750): datao=0, datal=4096, cccid=7 00:15:03.615 [2024-11-15 10:58:50.272201] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24851c0) on tqpair(0x2420750): expected_datao=0, payload_size=4096 00:15:03.615 [2024-11-15 10:58:50.272205] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:03.615 [2024-11-15 10:58:50.272211] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:03.615 [2024-11-15 10:58:50.272215] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:03.615 [2024-11-15 10:58:50.272224] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:03.615 [2024-11-15 10:58:50.272230] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:03.615 [2024-11-15 10:58:50.272234] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:03.616 [2024-11-15 10:58:50.272238] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2484ec0) on tqpair=0x2420750 00:15:03.616 [2024-11-15 10:58:50.272254] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:03.616 [2024-11-15 10:58:50.272260] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:03.616 [2024-11-15 10:58:50.272264] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:03.616 [2024-11-15 10:58:50.272267] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2484d40) on tqpair=0x2420750 00:15:03.616 [2024-11-15 10:58:50.272279] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:03.616 [2024-11-15 10:58:50.272286] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:03.616 [2024-11-15 10:58:50.272289] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:03.616 [2024-11-15 10:58:50.272293] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2485040) on tqpair=0x2420750 00:15:03.616 [2024-11-15 10:58:50.272300] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:03.616 [2024-11-15 10:58:50.272306] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:03.616 [2024-11-15 10:58:50.272309] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:03.616 [2024-11-15 10:58:50.272313] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24851c0) on tqpair=0x2420750 00:15:03.616 ===================================================== 00:15:03.616 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:15:03.616 ===================================================== 00:15:03.616 Controller Capabilities/Features 00:15:03.616 ================================ 00:15:03.616 Vendor ID: 8086 00:15:03.616 Subsystem Vendor ID: 8086 00:15:03.616 Serial Number: SPDK00000000000001 00:15:03.616 Model Number: SPDK bdev Controller 00:15:03.616 Firmware Version: 25.01 00:15:03.616 Recommended Arb Burst: 6 00:15:03.616 IEEE OUI Identifier: e4 d2 5c 00:15:03.616 Multi-path I/O 00:15:03.616 May have multiple subsystem ports: Yes 00:15:03.616 May have multiple controllers: Yes 00:15:03.616 Associated with SR-IOV VF: No 00:15:03.616 Max Data Transfer Size: 131072 00:15:03.616 Max Number of Namespaces: 32 00:15:03.616 Max Number of I/O Queues: 127 00:15:03.616 NVMe Specification Version (VS): 1.3 00:15:03.616 NVMe Specification Version (Identify): 1.3 
00:15:03.616 Maximum Queue Entries: 128 00:15:03.616 Contiguous Queues Required: Yes 00:15:03.616 Arbitration Mechanisms Supported 00:15:03.616 Weighted Round Robin: Not Supported 00:15:03.616 Vendor Specific: Not Supported 00:15:03.616 Reset Timeout: 15000 ms 00:15:03.616 Doorbell Stride: 4 bytes 00:15:03.616 NVM Subsystem Reset: Not Supported 00:15:03.616 Command Sets Supported 00:15:03.616 NVM Command Set: Supported 00:15:03.616 Boot Partition: Not Supported 00:15:03.616 Memory Page Size Minimum: 4096 bytes 00:15:03.616 Memory Page Size Maximum: 4096 bytes 00:15:03.616 Persistent Memory Region: Not Supported 00:15:03.616 Optional Asynchronous Events Supported 00:15:03.616 Namespace Attribute Notices: Supported 00:15:03.616 Firmware Activation Notices: Not Supported 00:15:03.616 ANA Change Notices: Not Supported 00:15:03.616 PLE Aggregate Log Change Notices: Not Supported 00:15:03.616 LBA Status Info Alert Notices: Not Supported 00:15:03.616 EGE Aggregate Log Change Notices: Not Supported 00:15:03.616 Normal NVM Subsystem Shutdown event: Not Supported 00:15:03.616 Zone Descriptor Change Notices: Not Supported 00:15:03.616 Discovery Log Change Notices: Not Supported 00:15:03.616 Controller Attributes 00:15:03.616 128-bit Host Identifier: Supported 00:15:03.616 Non-Operational Permissive Mode: Not Supported 00:15:03.616 NVM Sets: Not Supported 00:15:03.616 Read Recovery Levels: Not Supported 00:15:03.616 Endurance Groups: Not Supported 00:15:03.616 Predictable Latency Mode: Not Supported 00:15:03.616 Traffic Based Keep Alive: Not Supported 00:15:03.616 Namespace Granularity: Not Supported 00:15:03.616 SQ Associations: Not Supported 00:15:03.616 UUID List: Not Supported 00:15:03.616 Multi-Domain Subsystem: Not Supported 00:15:03.616 Fixed Capacity Management: Not Supported 00:15:03.616 Variable Capacity Management: Not Supported 00:15:03.616 Delete Endurance Group: Not Supported 00:15:03.616 Delete NVM Set: Not Supported 00:15:03.616 Extended LBA Formats Supported: Not Supported 00:15:03.616 Flexible Data Placement Supported: Not Supported 00:15:03.616 00:15:03.616 Controller Memory Buffer Support 00:15:03.616 ================================ 00:15:03.616 Supported: No 00:15:03.616 00:15:03.616 Persistent Memory Region Support 00:15:03.616 ================================ 00:15:03.616 Supported: No 00:15:03.616 00:15:03.616 Admin Command Set Attributes 00:15:03.616 ============================ 00:15:03.616 Security Send/Receive: Not Supported 00:15:03.616 Format NVM: Not Supported 00:15:03.616 Firmware Activate/Download: Not Supported 00:15:03.616 Namespace Management: Not Supported 00:15:03.616 Device Self-Test: Not Supported 00:15:03.616 Directives: Not Supported 00:15:03.616 NVMe-MI: Not Supported 00:15:03.616 Virtualization Management: Not Supported 00:15:03.616 Doorbell Buffer Config: Not Supported 00:15:03.616 Get LBA Status Capability: Not Supported 00:15:03.616 Command & Feature Lockdown Capability: Not Supported 00:15:03.616 Abort Command Limit: 4 00:15:03.616 Async Event Request Limit: 4 00:15:03.616 Number of Firmware Slots: N/A 00:15:03.616 Firmware Slot 1 Read-Only: N/A 00:15:03.616 Firmware Activation Without Reset: N/A 00:15:03.616 Multiple Update Detection Support: N/A 00:15:03.616 Firmware Update Granularity: No Information Provided 00:15:03.616 Per-Namespace SMART Log: No 00:15:03.616 Asymmetric Namespace Access Log Page: Not Supported 00:15:03.616 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:15:03.616 Command Effects Log Page: Supported 00:15:03.616 Get Log Page Extended 
Data: Supported 00:15:03.616 Telemetry Log Pages: Not Supported 00:15:03.616 Persistent Event Log Pages: Not Supported 00:15:03.616 Supported Log Pages Log Page: May Support 00:15:03.616 Commands Supported & Effects Log Page: Not Supported 00:15:03.616 Feature Identifiers & Effects Log Page: May Support 00:15:03.616 NVMe-MI Commands & Effects Log Page: May Support 00:15:03.616 Data Area 4 for Telemetry Log: Not Supported 00:15:03.616 Error Log Page Entries Supported: 128 00:15:03.616 Keep Alive: Supported 00:15:03.616 Keep Alive Granularity: 10000 ms 00:15:03.616 00:15:03.616 NVM Command Set Attributes 00:15:03.616 ========================== 00:15:03.616 Submission Queue Entry Size 00:15:03.616 Max: 64 00:15:03.616 Min: 64 00:15:03.616 Completion Queue Entry Size 00:15:03.616 Max: 16 00:15:03.616 Min: 16 00:15:03.616 Number of Namespaces: 32 00:15:03.616 Compare Command: Supported 00:15:03.616 Write Uncorrectable Command: Not Supported 00:15:03.616 Dataset Management Command: Supported 00:15:03.616 Write Zeroes Command: Supported 00:15:03.616 Set Features Save Field: Not Supported 00:15:03.616 Reservations: Supported 00:15:03.616 Timestamp: Not Supported 00:15:03.616 Copy: Supported 00:15:03.616 Volatile Write Cache: Present 00:15:03.616 Atomic Write Unit (Normal): 1 00:15:03.616 Atomic Write Unit (PFail): 1 00:15:03.616 Atomic Compare & Write Unit: 1 00:15:03.616 Fused Compare & Write: Supported 00:15:03.616 Scatter-Gather List 00:15:03.616 SGL Command Set: Supported 00:15:03.616 SGL Keyed: Supported 00:15:03.616 SGL Bit Bucket Descriptor: Not Supported 00:15:03.616 SGL Metadata Pointer: Not Supported 00:15:03.616 Oversized SGL: Not Supported 00:15:03.616 SGL Metadata Address: Not Supported 00:15:03.616 SGL Offset: Supported 00:15:03.617 Transport SGL Data Block: Not Supported 00:15:03.617 Replay Protected Memory Block: Not Supported 00:15:03.617 00:15:03.617 Firmware Slot Information 00:15:03.617 ========================= 00:15:03.617 Active slot: 1 00:15:03.617 Slot 1 Firmware Revision: 25.01 00:15:03.617 00:15:03.617 00:15:03.617 Commands Supported and Effects 00:15:03.617 ============================== 00:15:03.617 Admin Commands 00:15:03.617 -------------- 00:15:03.617 Get Log Page (02h): Supported 00:15:03.617 Identify (06h): Supported 00:15:03.617 Abort (08h): Supported 00:15:03.617 Set Features (09h): Supported 00:15:03.617 Get Features (0Ah): Supported 00:15:03.617 Asynchronous Event Request (0Ch): Supported 00:15:03.617 Keep Alive (18h): Supported 00:15:03.617 I/O Commands 00:15:03.617 ------------ 00:15:03.617 Flush (00h): Supported LBA-Change 00:15:03.617 Write (01h): Supported LBA-Change 00:15:03.617 Read (02h): Supported 00:15:03.617 Compare (05h): Supported 00:15:03.617 Write Zeroes (08h): Supported LBA-Change 00:15:03.617 Dataset Management (09h): Supported LBA-Change 00:15:03.617 Copy (19h): Supported LBA-Change 00:15:03.617 00:15:03.617 Error Log 00:15:03.617 ========= 00:15:03.617 00:15:03.617 Arbitration 00:15:03.617 =========== 00:15:03.617 Arbitration Burst: 1 00:15:03.617 00:15:03.617 Power Management 00:15:03.617 ================ 00:15:03.617 Number of Power States: 1 00:15:03.617 Current Power State: Power State #0 00:15:03.617 Power State #0: 00:15:03.617 Max Power: 0.00 W 00:15:03.617 Non-Operational State: Operational 00:15:03.617 Entry Latency: Not Reported 00:15:03.617 Exit Latency: Not Reported 00:15:03.617 Relative Read Throughput: 0 00:15:03.617 Relative Read Latency: 0 00:15:03.617 Relative Write Throughput: 0 00:15:03.617 Relative Write Latency: 0 
00:15:03.617 Idle Power: Not Reported 00:15:03.617 Active Power: Not Reported 00:15:03.617 Non-Operational Permissive Mode: Not Supported 00:15:03.617 00:15:03.617 Health Information 00:15:03.617 ================== 00:15:03.617 Critical Warnings: 00:15:03.617 Available Spare Space: OK 00:15:03.617 Temperature: OK 00:15:03.617 Device Reliability: OK 00:15:03.617 Read Only: No 00:15:03.617 Volatile Memory Backup: OK 00:15:03.617 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:03.617 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:03.617 Available Spare: 0% 00:15:03.617 Available Spare Threshold: 0% 00:15:03.617 Life Percentage Used:[2024-11-15 10:58:50.272413] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:03.617 [2024-11-15 10:58:50.272421] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x2420750) 00:15:03.617 [2024-11-15 10:58:50.272428] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.617 [2024-11-15 10:58:50.272452] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24851c0, cid 7, qid 0 00:15:03.617 [2024-11-15 10:58:50.272515] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:03.617 [2024-11-15 10:58:50.272523] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:03.617 [2024-11-15 10:58:50.272526] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:03.617 [2024-11-15 10:58:50.272531] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24851c0) on tqpair=0x2420750 00:15:03.617 [2024-11-15 10:58:50.272570] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:15:03.617 [2024-11-15 10:58:50.272583] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2484740) on tqpair=0x2420750 00:15:03.617 [2024-11-15 10:58:50.272605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:03.617 [2024-11-15 10:58:50.272612] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24848c0) on tqpair=0x2420750 00:15:03.617 [2024-11-15 10:58:50.272617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:03.617 [2024-11-15 10:58:50.272622] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2484a40) on tqpair=0x2420750 00:15:03.617 [2024-11-15 10:58:50.272627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:03.617 [2024-11-15 10:58:50.272632] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2484bc0) on tqpair=0x2420750 00:15:03.617 [2024-11-15 10:58:50.272637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:03.617 [2024-11-15 10:58:50.272647] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:03.617 [2024-11-15 10:58:50.272651] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:03.617 [2024-11-15 10:58:50.272655] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2420750) 00:15:03.617 [2024-11-15 10:58:50.272663] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:15:03.617 [2024-11-15 10:58:50.272688] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2484bc0, cid 3, qid 0 00:15:03.617 [2024-11-15 10:58:50.272756] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:03.617 [2024-11-15 10:58:50.272764] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:03.617 [2024-11-15 10:58:50.272767] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:03.617 [2024-11-15 10:58:50.272772] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2484bc0) on tqpair=0x2420750 00:15:03.617 [2024-11-15 10:58:50.272780] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:03.617 [2024-11-15 10:58:50.272784] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:03.617 [2024-11-15 10:58:50.272788] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2420750) 00:15:03.617 [2024-11-15 10:58:50.272796] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.617 [2024-11-15 10:58:50.272819] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2484bc0, cid 3, qid 0 00:15:03.617 [2024-11-15 10:58:50.272896] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:03.617 [2024-11-15 10:58:50.272904] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:03.617 [2024-11-15 10:58:50.272907] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:03.617 [2024-11-15 10:58:50.272911] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2484bc0) on tqpair=0x2420750 00:15:03.617 [2024-11-15 10:58:50.272917] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:15:03.617 [2024-11-15 10:58:50.272922] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:15:03.617 [2024-11-15 10:58:50.272932] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:03.617 [2024-11-15 10:58:50.272937] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:03.617 [2024-11-15 10:58:50.272941] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2420750) 00:15:03.617 [2024-11-15 10:58:50.272963] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.617 [2024-11-15 10:58:50.272982] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2484bc0, cid 3, qid 0 00:15:03.617 [2024-11-15 10:58:50.273031] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:03.617 [2024-11-15 10:58:50.273038] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:03.617 [2024-11-15 10:58:50.273042] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:03.617 [2024-11-15 10:58:50.273046] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2484bc0) on tqpair=0x2420750 00:15:03.617 [2024-11-15 10:58:50.273056] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:03.617 [2024-11-15 10:58:50.273062] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:03.617 [2024-11-15 10:58:50.273066] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2420750) 00:15:03.617 [2024-11-15 10:58:50.273073] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.617 [2024-11-15 10:58:50.273104] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2484bc0, cid 3, qid 0 00:15:03.617 [2024-11-15 10:58:50.273175] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:03.617 [2024-11-15 10:58:50.273182] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:03.617 [2024-11-15 10:58:50.273186] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:03.618 [2024-11-15 10:58:50.273190] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2484bc0) on tqpair=0x2420750 00:15:03.618 [2024-11-15 10:58:50.273200] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:03.618 [2024-11-15 10:58:50.273205] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:03.618 [2024-11-15 10:58:50.273209] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2420750) 00:15:03.618 [2024-11-15 10:58:50.273216] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.618 [2024-11-15 10:58:50.273234] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2484bc0, cid 3, qid 0 00:15:03.618 [2024-11-15 10:58:50.273288] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:03.618 [2024-11-15 10:58:50.273297] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:03.618 [2024-11-15 10:58:50.273301] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:03.618 [2024-11-15 10:58:50.273305] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2484bc0) on tqpair=0x2420750 00:15:03.618 [2024-11-15 10:58:50.273332] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:03.618 [2024-11-15 10:58:50.273338] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:03.618 [2024-11-15 10:58:50.273342] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2420750) 00:15:03.618 [2024-11-15 10:58:50.273350] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.618 [2024-11-15 10:58:50.273369] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2484bc0, cid 3, qid 0 00:15:03.618 [2024-11-15 10:58:50.273421] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:03.618 [2024-11-15 10:58:50.273429] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:03.618 [2024-11-15 10:58:50.273433] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:03.618 [2024-11-15 10:58:50.273437] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2484bc0) on tqpair=0x2420750 00:15:03.618 [2024-11-15 10:58:50.273448] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:03.618 [2024-11-15 10:58:50.273454] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:03.618 [2024-11-15 10:58:50.273458] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2420750) 00:15:03.618 [2024-11-15 10:58:50.273465] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.618 [2024-11-15 10:58:50.273497] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2484bc0, cid 3, qid 0 00:15:03.618 [2024-11-15 10:58:50.273545] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:03.618 [2024-11-15 10:58:50.273568] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:03.618 [2024-11-15 10:58:50.273585] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:03.618 [2024-11-15 10:58:50.273589] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2484bc0) on tqpair=0x2420750 00:15:03.618 [2024-11-15 10:58:50.273602] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:03.618 [2024-11-15 10:58:50.273607] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:03.618 [2024-11-15 10:58:50.273611] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2420750) 00:15:03.618 [2024-11-15 10:58:50.273619] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.618 [2024-11-15 10:58:50.273641] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2484bc0, cid 3, qid 0 00:15:03.618 [2024-11-15 10:58:50.273695] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:03.618 [2024-11-15 10:58:50.273703] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:03.618 [2024-11-15 10:58:50.273707] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:03.618 [2024-11-15 10:58:50.273711] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2484bc0) on tqpair=0x2420750 00:15:03.618 [2024-11-15 10:58:50.273722] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:03.618 [2024-11-15 10:58:50.273728] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:03.618 [2024-11-15 10:58:50.273732] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2420750) 00:15:03.618 [2024-11-15 10:58:50.273740] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.618 [2024-11-15 10:58:50.273759] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2484bc0, cid 3, qid 0 00:15:03.618 [2024-11-15 10:58:50.273811] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:03.618 [2024-11-15 10:58:50.273819] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:03.618 [2024-11-15 10:58:50.273823] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:03.618 [2024-11-15 10:58:50.273827] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2484bc0) on tqpair=0x2420750 00:15:03.618 [2024-11-15 10:58:50.273838] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:03.618 [2024-11-15 10:58:50.273843] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:03.618 [2024-11-15 10:58:50.273847] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2420750) 00:15:03.618 [2024-11-15 10:58:50.273855] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.618 [2024-11-15 10:58:50.273874] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2484bc0, cid 3, qid 0 00:15:03.618 [2024-11-15 10:58:50.273933] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:03.618 [2024-11-15 
10:58:50.273955] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:03.618 [2024-11-15 10:58:50.273959] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:03.618 [2024-11-15 10:58:50.273962] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2484bc0) on tqpair=0x2420750 00:15:03.618 [2024-11-15 10:58:50.273973] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:03.618 [2024-11-15 10:58:50.273978] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:03.618 [2024-11-15 10:58:50.273982] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2420750) 00:15:03.618 [2024-11-15 10:58:50.273989] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.618 [2024-11-15 10:58:50.274007] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2484bc0, cid 3, qid 0 00:15:03.618 [2024-11-15 10:58:50.274058] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:03.618 [2024-11-15 10:58:50.274065] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:03.618 [2024-11-15 10:58:50.274068] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:03.618 [2024-11-15 10:58:50.274072] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2484bc0) on tqpair=0x2420750 00:15:03.618 [2024-11-15 10:58:50.274082] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:03.618 [2024-11-15 10:58:50.274088] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:03.618 [2024-11-15 10:58:50.274092] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2420750) 00:15:03.618 [2024-11-15 10:58:50.274099] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.618 [2024-11-15 10:58:50.274117] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2484bc0, cid 3, qid 0 00:15:03.618 [2024-11-15 10:58:50.274165] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:03.618 [2024-11-15 10:58:50.274172] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:03.618 [2024-11-15 10:58:50.274175] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:03.618 [2024-11-15 10:58:50.274179] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2484bc0) on tqpair=0x2420750 00:15:03.618 [2024-11-15 10:58:50.274189] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:03.618 [2024-11-15 10:58:50.274194] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:03.618 [2024-11-15 10:58:50.274198] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2420750) 00:15:03.618 [2024-11-15 10:58:50.274206] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.618 [2024-11-15 10:58:50.274224] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2484bc0, cid 3, qid 0 00:15:03.618 [2024-11-15 10:58:50.274278] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:03.618 [2024-11-15 10:58:50.274285] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:03.618 [2024-11-15 10:58:50.274289] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:03.618 
[2024-11-15 10:58:50.274293] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2484bc0) on tqpair=0x2420750 00:15:03.618 [2024-11-15 10:58:50.274303] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:03.618 [2024-11-15 10:58:50.274308] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:03.618 [2024-11-15 10:58:50.274312] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2420750) 00:15:03.618 [2024-11-15 10:58:50.274319] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.618 [2024-11-15 10:58:50.274337] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2484bc0, cid 3, qid 0 00:15:03.618 [2024-11-15 10:58:50.274382] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:03.618 [2024-11-15 10:58:50.274389] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:03.618 [2024-11-15 10:58:50.274393] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:03.618 [2024-11-15 10:58:50.274397] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2484bc0) on tqpair=0x2420750 00:15:03.618 [2024-11-15 10:58:50.274407] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:03.618 [2024-11-15 10:58:50.274412] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:03.619 [2024-11-15 10:58:50.274416] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2420750) 00:15:03.619 [2024-11-15 10:58:50.274423] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.619 [2024-11-15 10:58:50.274441] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2484bc0, cid 3, qid 0 00:15:03.619 [2024-11-15 10:58:50.274490] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:03.619 [2024-11-15 10:58:50.274497] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:03.619 [2024-11-15 10:58:50.274501] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:03.619 [2024-11-15 10:58:50.274505] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2484bc0) on tqpair=0x2420750 00:15:03.619 [2024-11-15 10:58:50.274515] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:03.619 [2024-11-15 10:58:50.274520] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:03.619 [2024-11-15 10:58:50.274525] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2420750) 00:15:03.619 [2024-11-15 10:58:50.274532] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.619 [2024-11-15 10:58:50.274550] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2484bc0, cid 3, qid 0 00:15:03.619 [2024-11-15 10:58:50.274615] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:03.619 [2024-11-15 10:58:50.274623] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:03.619 [2024-11-15 10:58:50.274627] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:03.619 [2024-11-15 10:58:50.274631] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2484bc0) on tqpair=0x2420750 00:15:03.619 [2024-11-15 10:58:50.274641] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:03.619 [2024-11-15 10:58:50.274647] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:03.619 [2024-11-15 10:58:50.274651] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2420750) 00:15:03.619 [2024-11-15 10:58:50.274658] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.619 [2024-11-15 10:58:50.274678] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2484bc0, cid 3, qid 0 00:15:03.619 [2024-11-15 10:58:50.274725] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:03.619 [2024-11-15 10:58:50.274732] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:03.619 [2024-11-15 10:58:50.274736] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:03.619 [2024-11-15 10:58:50.274740] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2484bc0) on tqpair=0x2420750 00:15:03.619 [2024-11-15 10:58:50.274750] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:03.619 [2024-11-15 10:58:50.274755] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:03.619 [2024-11-15 10:58:50.274759] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2420750) 00:15:03.619 [2024-11-15 10:58:50.274766] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.619 [2024-11-15 10:58:50.274784] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2484bc0, cid 3, qid 0 00:15:03.619 [2024-11-15 10:58:50.274832] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:03.619 [2024-11-15 10:58:50.274841] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:03.619 [2024-11-15 10:58:50.274844] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:03.619 [2024-11-15 10:58:50.274848] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2484bc0) on tqpair=0x2420750 00:15:03.619 [2024-11-15 10:58:50.274859] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:03.619 [2024-11-15 10:58:50.274864] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:03.619 [2024-11-15 10:58:50.274868] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2420750) 00:15:03.619 [2024-11-15 10:58:50.274875] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.619 [2024-11-15 10:58:50.274894] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2484bc0, cid 3, qid 0 00:15:03.619 [2024-11-15 10:58:50.274941] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:03.619 [2024-11-15 10:58:50.274953] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:03.619 [2024-11-15 10:58:50.274957] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:03.619 [2024-11-15 10:58:50.274961] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2484bc0) on tqpair=0x2420750 00:15:03.619 [2024-11-15 10:58:50.274972] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:03.619 [2024-11-15 10:58:50.274977] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:03.619 [2024-11-15 10:58:50.274981] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2420750) 00:15:03.619 [2024-11-15 10:58:50.274988] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.619 [2024-11-15 10:58:50.275008] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2484bc0, cid 3, qid 0 00:15:03.619 [2024-11-15 10:58:50.275062] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:03.619 [2024-11-15 10:58:50.275073] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:03.619 [2024-11-15 10:58:50.275077] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:03.619 [2024-11-15 10:58:50.275081] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2484bc0) on tqpair=0x2420750 00:15:03.619 [2024-11-15 10:58:50.275092] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:03.619 [2024-11-15 10:58:50.275097] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:03.619 [2024-11-15 10:58:50.275101] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2420750) 00:15:03.619 [2024-11-15 10:58:50.275108] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.619 [2024-11-15 10:58:50.275127] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2484bc0, cid 3, qid 0 00:15:03.619 [2024-11-15 10:58:50.275171] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:03.619 [2024-11-15 10:58:50.275178] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:03.619 [2024-11-15 10:58:50.275181] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:03.619 [2024-11-15 10:58:50.275185] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2484bc0) on tqpair=0x2420750 00:15:03.619 [2024-11-15 10:58:50.275196] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:03.619 [2024-11-15 10:58:50.275201] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:03.619 [2024-11-15 10:58:50.275205] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2420750) 00:15:03.619 [2024-11-15 10:58:50.275212] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.619 [2024-11-15 10:58:50.275230] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2484bc0, cid 3, qid 0 00:15:03.619 [2024-11-15 10:58:50.275277] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:03.619 [2024-11-15 10:58:50.275284] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:03.619 [2024-11-15 10:58:50.275287] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:03.619 [2024-11-15 10:58:50.275291] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2484bc0) on tqpair=0x2420750 00:15:03.619 [2024-11-15 10:58:50.275302] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:03.619 [2024-11-15 10:58:50.275306] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:03.619 [2024-11-15 10:58:50.275310] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2420750) 00:15:03.619 [2024-11-15 10:58:50.275318] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET 
qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.619 [2024-11-15 10:58:50.275335] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2484bc0, cid 3, qid 0 00:15:03.619 [2024-11-15 10:58:50.275382] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:03.619 [2024-11-15 10:58:50.275389] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:03.619 [2024-11-15 10:58:50.275393] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:03.619 [2024-11-15 10:58:50.275397] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2484bc0) on tqpair=0x2420750 00:15:03.619 [2024-11-15 10:58:50.275407] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:03.619 [2024-11-15 10:58:50.275412] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:03.619 [2024-11-15 10:58:50.275416] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2420750) 00:15:03.619 [2024-11-15 10:58:50.275423] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.620 [2024-11-15 10:58:50.276441] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2484bc0, cid 3, qid 0 00:15:03.620 [2024-11-15 10:58:50.276490] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:03.620 [2024-11-15 10:58:50.276497] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:03.620 [2024-11-15 10:58:50.276500] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:03.620 [2024-11-15 10:58:50.276504] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete
tcp_req(0x2484bc0) on tqpair=0x2420750 00:15:03.620 [2024-11-15 10:58:50.276515] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:03.620 [2024-11-15 10:58:50.276520] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:03.620 [2024-11-15 10:58:50.276524] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2420750) 00:15:03.620 [2024-11-15 10:58:50.276531] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.620 [2024-11-15 10:58:50.276565] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2484bc0, cid 3, qid 0 00:15:03.620 [2024-11-15 10:58:50.280607] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:03.620 [2024-11-15 10:58:50.280628] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:03.620 [2024-11-15 10:58:50.280649] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:03.620 [2024-11-15 10:58:50.280654] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2484bc0) on tqpair=0x2420750 00:15:03.620 [2024-11-15 10:58:50.280667] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:03.620 [2024-11-15 10:58:50.280673] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:03.620 [2024-11-15 10:58:50.280676] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2420750) 00:15:03.620 [2024-11-15 10:58:50.280685] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:03.620 [2024-11-15 10:58:50.280711] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2484bc0, cid 3, qid 0 00:15:03.620 [2024-11-15 10:58:50.280764] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:03.620 [2024-11-15 10:58:50.280772] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:03.620 [2024-11-15 10:58:50.280775] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:03.620 [2024-11-15 10:58:50.280779] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2484bc0) on tqpair=0x2420750 00:15:03.620 [2024-11-15 10:58:50.280787] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 7 milliseconds 00:15:03.620 0% 00:15:03.620 Data Units Read: 0 00:15:03.620 Data Units Written: 0 00:15:03.620 Host Read Commands: 0 00:15:03.620 Host Write Commands: 0 00:15:03.620 Controller Busy Time: 0 minutes 00:15:03.620 Power Cycles: 0 00:15:03.620 Power On Hours: 0 hours 00:15:03.620 Unsafe Shutdowns: 0 00:15:03.620 Unrecoverable Media Errors: 0 00:15:03.620 Lifetime Error Log Entries: 0 00:15:03.620 Warning Temperature Time: 0 minutes 00:15:03.620 Critical Temperature Time: 0 minutes 00:15:03.620 00:15:03.620 Number of Queues 00:15:03.620 ================ 00:15:03.620 Number of I/O Submission Queues: 127 00:15:03.620 Number of I/O Completion Queues: 127 00:15:03.620 00:15:03.620 Active Namespaces 00:15:03.620 ================= 00:15:03.620 Namespace ID:1 00:15:03.620 Error Recovery Timeout: Unlimited 00:15:03.620 Command Set Identifier: NVM (00h) 00:15:03.620 Deallocate: Supported 00:15:03.620 Deallocated/Unwritten Error: Not Supported 00:15:03.620 Deallocated Read Value: Unknown 00:15:03.620 Deallocate in Write Zeroes: Not Supported 00:15:03.620 Deallocated Guard Field: 0xFFFF 00:15:03.620 Flush: 
Supported 00:15:03.620 Reservation: Supported 00:15:03.620 Namespace Sharing Capabilities: Multiple Controllers 00:15:03.620 Size (in LBAs): 131072 (0GiB) 00:15:03.620 Capacity (in LBAs): 131072 (0GiB) 00:15:03.621 Utilization (in LBAs): 131072 (0GiB) 00:15:03.621 NGUID: ABCDEF0123456789ABCDEF0123456789 00:15:03.621 EUI64: ABCDEF0123456789 00:15:03.621 UUID: c182177f-255b-45ec-b5d9-e498d58df194 00:15:03.621 Thin Provisioning: Not Supported 00:15:03.621 Per-NS Atomic Units: Yes 00:15:03.621 Atomic Boundary Size (Normal): 0 00:15:03.621 Atomic Boundary Size (PFail): 0 00:15:03.621 Atomic Boundary Offset: 0 00:15:03.621 Maximum Single Source Range Length: 65535 00:15:03.621 Maximum Copy Length: 65535 00:15:03.621 Maximum Source Range Count: 1 00:15:03.621 NGUID/EUI64 Never Reused: No 00:15:03.621 Namespace Write Protected: No 00:15:03.621 Number of LBA Formats: 1 00:15:03.621 Current LBA Format: LBA Format #00 00:15:03.621 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:03.621 00:15:03.621 10:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:15:03.621 10:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:03.621 10:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.621 10:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:03.621 10:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.621 10:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:15:03.621 10:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:15:03.621 10:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:03.621 10:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:15:03.621 10:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:03.621 10:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:15:03.621 10:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:03.621 10:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:03.621 rmmod nvme_tcp 00:15:03.621 rmmod nvme_fabrics 00:15:03.621 rmmod nvme_keyring 00:15:03.621 10:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:03.621 10:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:15:03.621 10:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:15:03.621 10:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 73890 ']' 00:15:03.621 10:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 73890 00:15:03.621 10:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 73890 ']' 00:15:03.621 10:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 73890 00:15:03.621 10:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:15:03.621 10:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:03.621 10:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73890 00:15:03.621 10:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # 
process_name=reactor_0 00:15:03.621 killing process with pid 73890 00:15:03.621 10:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:03.621 10:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73890' 00:15:03.621 10:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 73890 00:15:03.621 10:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 73890 00:15:03.879 10:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:03.879 10:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:03.879 10:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:03.879 10:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:15:03.879 10:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:15:03.879 10:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:15:03.879 10:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:03.879 10:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:03.879 10:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:03.879 10:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:03.879 10:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:04.138 10:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:04.138 10:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:04.138 10:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:04.138 10:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:04.138 10:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:04.138 10:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:04.138 10:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:04.138 10:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:04.138 10:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:04.138 10:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:04.138 10:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:04.138 10:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:04.138 10:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:04.138 10:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:04.138 10:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:04.138 10:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@300 -- # return 0 00:15:04.138 00:15:04.138 real 
0m2.869s 00:15:04.138 user 0m7.449s 00:15:04.138 sys 0m0.732s 00:15:04.138 10:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:04.138 ************************************ 00:15:04.138 END TEST nvmf_identify 00:15:04.138 ************************************ 00:15:04.138 10:58:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:04.138 10:58:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:15:04.138 10:58:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:04.139 10:58:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:04.139 10:58:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:04.399 ************************************ 00:15:04.399 START TEST nvmf_perf 00:15:04.399 ************************************ 00:15:04.399 10:58:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:15:04.399 * Looking for test storage... 00:15:04.399 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:04.399 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:04.399 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:15:04.399 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:04.399 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:04.399 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:04.399 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:04.399 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:04.399 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:15:04.399 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:15:04.399 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:15:04.399 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:15:04.399 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:15:04.399 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:15:04.399 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:15:04.399 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:04.399 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:15:04.399 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:15:04.399 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:04.399 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:04.399 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:15:04.399 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:15:04.399 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:04.399 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:15:04.399 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:15:04.399 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:15:04.399 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:15:04.399 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:04.399 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:15:04.399 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:15:04.399 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:04.399 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:04.399 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:15:04.399 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:04.399 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:04.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:04.399 --rc genhtml_branch_coverage=1 00:15:04.399 --rc genhtml_function_coverage=1 00:15:04.399 --rc genhtml_legend=1 00:15:04.399 --rc geninfo_all_blocks=1 00:15:04.399 --rc geninfo_unexecuted_blocks=1 00:15:04.399 00:15:04.400 ' 00:15:04.400 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:04.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:04.400 --rc genhtml_branch_coverage=1 00:15:04.400 --rc genhtml_function_coverage=1 00:15:04.400 --rc genhtml_legend=1 00:15:04.400 --rc geninfo_all_blocks=1 00:15:04.400 --rc geninfo_unexecuted_blocks=1 00:15:04.400 00:15:04.400 ' 00:15:04.400 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:04.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:04.400 --rc genhtml_branch_coverage=1 00:15:04.400 --rc genhtml_function_coverage=1 00:15:04.400 --rc genhtml_legend=1 00:15:04.400 --rc geninfo_all_blocks=1 00:15:04.400 --rc geninfo_unexecuted_blocks=1 00:15:04.400 00:15:04.400 ' 00:15:04.400 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:04.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:04.400 --rc genhtml_branch_coverage=1 00:15:04.400 --rc genhtml_function_coverage=1 00:15:04.400 --rc genhtml_legend=1 00:15:04.400 --rc geninfo_all_blocks=1 00:15:04.400 --rc geninfo_unexecuted_blocks=1 00:15:04.400 00:15:04.400 ' 00:15:04.400 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:04.400 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:15:04.400 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:04.400 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:04.400 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:15:04.400 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:04.400 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:04.400 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:04.400 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:04.400 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:04.400 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:04.400 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:04.400 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:15:04.400 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:15:04.400 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:04.400 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:04.400 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:04.400 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:04.400 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:04.400 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:15:04.400 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:04.400 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:04.400 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:04.400 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.400 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.400 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.400 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:15:04.400 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.400 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:15:04.400 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:04.400 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:04.400 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:04.400 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:04.400 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:04.400 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:04.400 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:04.400 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:04.400 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:04.400 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:04.400 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:04.400 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:04.400 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:04.400 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:15:04.400 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:04.400 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:04.400 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:04.400 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:04.400 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:04.400 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:04.400 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- 
# eval '_remove_spdk_ns 15> /dev/null' 00:15:04.400 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:04.400 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:04.400 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:04.400 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:04.400 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:04.400 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:04.400 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:04.400 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:04.400 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:04.400 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:04.400 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:04.400 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:04.400 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:04.400 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:04.400 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:04.400 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:04.400 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:04.400 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:04.400 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:04.400 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:04.400 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:04.400 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:04.400 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:04.400 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:04.400 Cannot find device "nvmf_init_br" 00:15:04.400 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true 00:15:04.400 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:04.400 Cannot find device "nvmf_init_br2" 00:15:04.400 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true 00:15:04.400 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:04.659 Cannot find device "nvmf_tgt_br" 00:15:04.659 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # true 00:15:04.659 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:04.659 Cannot find device "nvmf_tgt_br2" 00:15:04.659 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # true 00:15:04.659 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:04.659 Cannot find device "nvmf_init_br" 00:15:04.659 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # true 00:15:04.659 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:04.659 Cannot find device "nvmf_init_br2" 00:15:04.659 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # true 00:15:04.659 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:04.659 Cannot find device "nvmf_tgt_br" 00:15:04.659 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # true 00:15:04.659 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:04.659 Cannot find device "nvmf_tgt_br2" 00:15:04.659 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # true 00:15:04.659 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:04.659 Cannot find device "nvmf_br" 00:15:04.659 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # true 00:15:04.659 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:04.659 Cannot find device "nvmf_init_if" 00:15:04.659 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # true 00:15:04.659 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:04.659 Cannot find device "nvmf_init_if2" 00:15:04.659 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # true 00:15:04.660 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:04.660 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:04.660 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # true 00:15:04.660 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:04.660 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:04.660 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # true 00:15:04.660 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:04.660 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:04.660 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:04.660 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:04.660 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:04.660 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:04.660 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:04.660 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:04.660 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:04.660 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:04.660 10:58:51 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:04.660 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:04.660 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:04.660 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:04.660 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:04.660 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:04.660 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:04.660 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:04.660 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:04.660 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:04.660 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:04.660 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:04.660 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:04.660 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:04.919 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:04.919 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:04.919 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:04.919 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:04.919 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:04.919 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:04.919 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:04.919 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:04.919 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:04.919 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:04.919 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.077 ms 00:15:04.919 00:15:04.919 --- 10.0.0.3 ping statistics --- 00:15:04.919 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:04.919 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:15:04.919 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:04.919 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:15:04.919 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.045 ms 00:15:04.919 00:15:04.919 --- 10.0.0.4 ping statistics --- 00:15:04.919 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:04.919 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:15:04.919 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:04.919 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:04.919 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.056 ms 00:15:04.919 00:15:04.919 --- 10.0.0.1 ping statistics --- 00:15:04.919 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:04.919 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:15:04.919 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:04.919 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:04.919 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:15:04.919 00:15:04.919 --- 10.0.0.2 ping statistics --- 00:15:04.919 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:04.919 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:15:04.919 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:04.919 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@461 -- # return 0 00:15:04.919 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:04.919 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:04.919 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:04.919 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:04.919 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:04.919 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:04.919 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:04.919 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:15:04.919 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:04.919 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:04.919 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:15:04.919 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=74146 00:15:04.919 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 74146 00:15:04.919 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 74146 ']' 00:15:04.919 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:04.919 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:04.919 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:04.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:04.919 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
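For reference, the target bring-up that nvmf/common.sh and perf.sh trace above can be reproduced by hand. The following is a minimal sketch, assuming the paths and addresses used by this harness (SPDK checkout at /home/vagrant/spdk_repo/spdk, target namespace nvmf_tgt_ns_spdk, veth address 10.0.0.3, default RPC socket /var/tmp/spdk.sock); it is not the test script itself, and the readiness loop is an illustrative stand-in for the harness's waitforlisten helper.

  # Launch the NVMe-oF target inside the test namespace (core mask 0xF, all trace groups enabled)
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Wait until the target answers on the default RPC socket (/var/tmp/spdk.sock)
  until "$rpc" rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
  # Create the TCP transport, a subsystem backed by a 64 MiB malloc bdev, and a listener on the veth address
  "$rpc" nvmf_create_transport -t tcp -o
  "$rpc" bdev_malloc_create 64 512
  "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

The unix-domain RPC socket is reachable from the host even though the target runs in its own network namespace, which is why the traced rpc.py calls above need no ip netns exec prefix.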
00:15:04.919 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:04.919 10:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:15:04.919 [2024-11-15 10:58:51.675740] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:15:04.919 [2024-11-15 10:58:51.675854] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:05.178 [2024-11-15 10:58:51.819714] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:05.179 [2024-11-15 10:58:51.878877] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:05.179 [2024-11-15 10:58:51.878932] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:05.179 [2024-11-15 10:58:51.878958] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:05.179 [2024-11-15 10:58:51.878966] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:05.179 [2024-11-15 10:58:51.878973] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:05.179 [2024-11-15 10:58:51.880172] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:05.179 [2024-11-15 10:58:51.880377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:05.179 [2024-11-15 10:58:51.880503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:05.179 [2024-11-15 10:58:51.880503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:05.179 [2024-11-15 10:58:51.934642] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:05.179 10:58:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:05.179 10:58:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:15:05.179 10:58:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:05.179 10:58:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:05.179 10:58:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:15:05.437 10:58:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:05.437 10:58:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:05.437 10:58:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:15:05.697 10:58:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:15:05.697 10:58:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:15:05.956 10:58:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:15:05.956 10:58:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:06.215 10:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:15:06.215 10:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- 
# '[' -n 0000:00:10.0 ']' 00:15:06.215 10:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:15:06.215 10:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:15:06.215 10:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:06.782 [2024-11-15 10:58:53.351155] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:06.782 10:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:06.782 10:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:15:06.782 10:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:07.041 10:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:15:07.041 10:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:15:07.300 10:58:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:07.559 [2024-11-15 10:58:54.340997] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:07.559 10:58:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:15:07.817 10:58:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:15:07.817 10:58:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:15:07.817 10:58:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:15:07.817 10:58:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:15:09.192 Initializing NVMe Controllers 00:15:09.192 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:15:09.192 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:15:09.192 Initialization complete. Launching workers. 00:15:09.192 ======================================================== 00:15:09.192 Latency(us) 00:15:09.192 Device Information : IOPS MiB/s Average min max 00:15:09.192 PCIE (0000:00:10.0) NSID 1 from core 0: 23620.01 92.27 1354.11 364.05 7958.14 00:15:09.192 ======================================================== 00:15:09.192 Total : 23620.01 92.27 1354.11 364.05 7958.14 00:15:09.192 00:15:09.192 10:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:10.570 Initializing NVMe Controllers 00:15:10.570 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:15:10.570 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:10.570 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:10.570 Initialization complete. Launching workers. 
00:15:10.570 ======================================================== 00:15:10.570 Latency(us) 00:15:10.570 Device Information : IOPS MiB/s Average min max 00:15:10.570 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3371.87 13.17 296.18 106.61 4307.00 00:15:10.570 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 123.00 0.48 8194.22 5048.70 12048.03 00:15:10.570 ======================================================== 00:15:10.570 Total : 3494.86 13.65 574.14 106.61 12048.03 00:15:10.570 00:15:10.570 10:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:11.990 Initializing NVMe Controllers 00:15:11.990 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:15:11.990 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:11.990 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:11.990 Initialization complete. Launching workers. 00:15:11.990 ======================================================== 00:15:11.990 Latency(us) 00:15:11.990 Device Information : IOPS MiB/s Average min max 00:15:11.990 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8953.31 34.97 3575.24 541.63 9483.02 00:15:11.990 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3920.26 15.31 8176.32 5156.13 16661.88 00:15:11.990 ======================================================== 00:15:11.990 Total : 12873.57 50.29 4976.36 541.63 16661.88 00:15:11.990 00:15:11.990 10:58:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:15:11.990 10:58:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:14.549 Initializing NVMe Controllers 00:15:14.549 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:15:14.549 Controller IO queue size 128, less than required. 00:15:14.549 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:14.549 Controller IO queue size 128, less than required. 00:15:14.549 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:14.549 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:14.549 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:14.549 Initialization complete. Launching workers. 
00:15:14.549 ======================================================== 00:15:14.549 Latency(us) 00:15:14.549 Device Information : IOPS MiB/s Average min max 00:15:14.549 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1720.22 430.06 75448.27 52261.40 127075.93 00:15:14.549 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 638.53 159.63 203539.38 71618.66 309221.96 00:15:14.549 ======================================================== 00:15:14.549 Total : 2358.75 589.69 110123.22 52261.40 309221.96 00:15:14.549 00:15:14.549 10:59:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0xf -P 4 00:15:14.549 Initializing NVMe Controllers 00:15:14.549 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:15:14.549 Controller IO queue size 128, less than required. 00:15:14.549 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:14.549 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:15:14.549 Controller IO queue size 128, less than required. 00:15:14.549 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:14.549 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:15:14.549 WARNING: Some requested NVMe devices were skipped 00:15:14.549 No valid NVMe controllers or AIO or URING devices found 00:15:14.549 10:59:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' --transport-stat 00:15:17.081 Initializing NVMe Controllers 00:15:17.081 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:15:17.081 Controller IO queue size 128, less than required. 00:15:17.081 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:17.081 Controller IO queue size 128, less than required. 00:15:17.081 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:17.081 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:17.081 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:17.081 Initialization complete. Launching workers. 
00:15:17.081 00:15:17.081 ==================== 00:15:17.081 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:15:17.081 TCP transport: 00:15:17.081 polls: 8946 00:15:17.081 idle_polls: 5632 00:15:17.081 sock_completions: 3314 00:15:17.081 nvme_completions: 5343 00:15:17.081 submitted_requests: 7896 00:15:17.081 queued_requests: 1 00:15:17.081 00:15:17.081 ==================== 00:15:17.081 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:15:17.081 TCP transport: 00:15:17.081 polls: 12141 00:15:17.081 idle_polls: 8507 00:15:17.081 sock_completions: 3634 00:15:17.081 nvme_completions: 5963 00:15:17.081 submitted_requests: 8964 00:15:17.081 queued_requests: 1 00:15:17.081 ======================================================== 00:15:17.081 Latency(us) 00:15:17.081 Device Information : IOPS MiB/s Average min max 00:15:17.081 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1334.61 333.65 97392.99 50924.19 152517.52 00:15:17.081 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1489.51 372.38 86404.45 39823.02 127281.45 00:15:17.081 ======================================================== 00:15:17.082 Total : 2824.11 706.03 91597.37 39823.02 152517.52 00:15:17.082 00:15:17.082 10:59:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:15:17.082 10:59:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:17.340 10:59:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:15:17.340 10:59:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:15:17.340 10:59:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:15:17.340 10:59:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:17.340 10:59:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:15:17.340 10:59:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:17.340 10:59:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:15:17.340 10:59:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:17.340 10:59:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:17.340 rmmod nvme_tcp 00:15:17.340 rmmod nvme_fabrics 00:15:17.599 rmmod nvme_keyring 00:15:17.599 10:59:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:17.599 10:59:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:15:17.599 10:59:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:15:17.599 10:59:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 74146 ']' 00:15:17.599 10:59:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 74146 00:15:17.599 10:59:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 74146 ']' 00:15:17.599 10:59:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 74146 00:15:17.599 10:59:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:15:17.599 10:59:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:17.599 10:59:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74146 00:15:17.599 killing process with pid 74146 00:15:17.599 10:59:04 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:17.599 10:59:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:17.599 10:59:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74146' 00:15:17.599 10:59:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 74146 00:15:17.599 10:59:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 74146 00:15:17.858 10:59:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:17.858 10:59:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:17.858 10:59:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:17.858 10:59:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:15:17.858 10:59:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:15:17.858 10:59:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:17.858 10:59:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:15:17.858 10:59:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:17.858 10:59:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:17.858 10:59:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:18.119 10:59:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:18.119 10:59:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:18.119 10:59:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:18.119 10:59:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:18.119 10:59:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:18.119 10:59:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:18.119 10:59:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:18.119 10:59:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:18.119 10:59:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:18.119 10:59:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:18.120 10:59:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:18.120 10:59:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:18.120 10:59:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:18.120 10:59:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:18.120 10:59:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:18.120 10:59:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:18.120 10:59:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@300 -- # return 0 00:15:18.120 ************************************ 00:15:18.120 END TEST nvmf_perf 00:15:18.120 ************************************ 
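The nvmftestfini teardown above undoes the whole fixture in two steps. The iptables-save, grep -v SPDK_NVMF and iptables-restore fragments form a single pipeline that rewrites the firewall ruleset without the rules the tests tagged with an SPDK_NVMF comment; after that the veth/bridge topology and the target namespace are removed. Condensed from the xtrace (interface and namespace names as they appear in the log; the final line is an assumption about what _remove_spdk_ns amounts to):

    iptables-save | grep -v SPDK_NVMF | iptables-restore
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" nomaster
        ip link set "$dev" down
    done
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    ip netns delete nvmf_tgt_ns_spdk    # assumed: _remove_spdk_ns deletes the namespace itself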
00:15:18.120 00:15:18.120 real 0m13.944s 00:15:18.120 user 0m49.809s 00:15:18.120 sys 0m4.272s 00:15:18.120 10:59:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:18.120 10:59:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:15:18.381 10:59:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:15:18.381 10:59:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:18.381 10:59:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:18.381 10:59:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:18.381 ************************************ 00:15:18.381 START TEST nvmf_fio_host 00:15:18.381 ************************************ 00:15:18.381 10:59:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:15:18.381 * Looking for test storage... 00:15:18.381 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:18.381 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:18.381 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:15:18.381 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:18.381 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:18.381 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:18.381 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:18.381 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:18.381 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:15:18.381 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:15:18.381 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:15:18.381 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:15:18.382 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:15:18.382 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:15:18.382 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:15:18.382 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:18.382 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:15:18.382 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:15:18.382 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:18.382 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:18.382 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:15:18.382 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:15:18.382 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:18.382 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:15:18.382 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:15:18.382 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:15:18.382 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:15:18.382 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:18.382 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:15:18.382 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:15:18.382 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:18.382 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:18.382 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:15:18.382 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:18.382 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:18.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:18.382 --rc genhtml_branch_coverage=1 00:15:18.382 --rc genhtml_function_coverage=1 00:15:18.382 --rc genhtml_legend=1 00:15:18.382 --rc geninfo_all_blocks=1 00:15:18.382 --rc geninfo_unexecuted_blocks=1 00:15:18.382 00:15:18.382 ' 00:15:18.382 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:18.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:18.382 --rc genhtml_branch_coverage=1 00:15:18.382 --rc genhtml_function_coverage=1 00:15:18.382 --rc genhtml_legend=1 00:15:18.382 --rc geninfo_all_blocks=1 00:15:18.382 --rc geninfo_unexecuted_blocks=1 00:15:18.382 00:15:18.382 ' 00:15:18.382 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:18.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:18.382 --rc genhtml_branch_coverage=1 00:15:18.382 --rc genhtml_function_coverage=1 00:15:18.382 --rc genhtml_legend=1 00:15:18.382 --rc geninfo_all_blocks=1 00:15:18.382 --rc geninfo_unexecuted_blocks=1 00:15:18.382 00:15:18.382 ' 00:15:18.382 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:18.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:18.382 --rc genhtml_branch_coverage=1 00:15:18.382 --rc genhtml_function_coverage=1 00:15:18.382 --rc genhtml_legend=1 00:15:18.382 --rc geninfo_all_blocks=1 00:15:18.382 --rc geninfo_unexecuted_blocks=1 00:15:18.382 00:15:18.382 ' 00:15:18.382 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:18.382 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:15:18.382 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:18.382 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:18.382 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:18.382 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:18.382 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:18.382 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:18.382 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:15:18.382 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:18.382 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:18.382 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:15:18.382 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:18.382 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:18.382 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:18.382 10:59:05 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:18.382 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:18.382 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:18.382 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:18.382 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:18.382 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:18.382 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:18.382 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:15:18.382 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:15:18.382 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:18.382 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:18.382 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:18.382 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:18.382 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:18.382 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:15:18.382 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:18.382 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:18.382 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:18.382 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:18.382 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:18.383 10:59:05 
nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:18.383 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:15:18.383 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:18.383 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:15:18.383 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:18.383 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:18.383 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:18.383 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:18.383 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:18.383 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:18.383 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:18.383 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:18.383 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:18.383 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:18.383 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:18.383 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:15:18.383 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:18.383 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:18.383 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:18.383 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:18.383 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:18.383 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
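The nvmftestinit sequence that follows first tries to remove any leftover interfaces, which is why the "Cannot find device" and "Cannot open network namespace" messages below are expected, and then rebuilds the test topology from scratch: four veth pairs whose *_br ends are enslaved to a bridge nvmf_br, with the target-side interfaces moved into a dedicated nvmf_tgt_ns_spdk namespace so that 10.0.0.1/10.0.0.2 serve as initiator addresses and 10.0.0.3/10.0.0.4 as target addresses. Stripped of the xtrace framing, the setup below is roughly:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if        # first initiator
    ip addr add 10.0.0.2/24 dev nvmf_init_if2       # second initiator
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if    # first target
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2   # second target
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done
    # the real commands also tag each rule with an SPDK_NVMF comment so nvmftestfini can strip them later
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

Connectivity across the bridge is then verified with one ping per address before the target application is started.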
00:15:18.383 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:18.383 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:18.383 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:18.383 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:18.383 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:18.383 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:18.383 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:18.383 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:18.383 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:18.383 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:18.383 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:18.383 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:18.383 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:18.383 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:18.383 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:18.383 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:18.383 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:18.383 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:18.383 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:18.383 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:18.383 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:18.383 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:18.383 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:18.383 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:18.383 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:18.383 Cannot find device "nvmf_init_br" 00:15:18.383 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:15:18.383 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:18.383 Cannot find device "nvmf_init_br2" 00:15:18.383 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:15:18.383 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:18.642 Cannot find device "nvmf_tgt_br" 00:15:18.642 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # true 00:15:18.642 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # ip link set 
nvmf_tgt_br2 nomaster 00:15:18.642 Cannot find device "nvmf_tgt_br2" 00:15:18.642 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # true 00:15:18.642 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:18.642 Cannot find device "nvmf_init_br" 00:15:18.642 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # true 00:15:18.642 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:18.642 Cannot find device "nvmf_init_br2" 00:15:18.642 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # true 00:15:18.642 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:18.642 Cannot find device "nvmf_tgt_br" 00:15:18.642 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # true 00:15:18.642 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:18.642 Cannot find device "nvmf_tgt_br2" 00:15:18.642 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # true 00:15:18.642 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:18.642 Cannot find device "nvmf_br" 00:15:18.642 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # true 00:15:18.642 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:18.642 Cannot find device "nvmf_init_if" 00:15:18.642 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # true 00:15:18.642 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:18.642 Cannot find device "nvmf_init_if2" 00:15:18.642 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # true 00:15:18.642 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:18.642 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:18.643 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # true 00:15:18.643 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:18.643 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:18.643 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # true 00:15:18.643 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:18.643 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:18.643 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:18.643 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:18.643 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:18.643 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:18.643 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:18.643 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev 
nvmf_init_if 00:15:18.643 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:18.643 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:18.643 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:18.643 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:18.643 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:18.643 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:18.643 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:18.643 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:18.643 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:18.643 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:18.643 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:18.643 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:18.643 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:18.643 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:18.643 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:18.643 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:18.902 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:18.902 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:18.902 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:18.902 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:18.902 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:18.902 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:18.902 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:18.902 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:18.902 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:18.902 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:15:18.902 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:15:18.902 00:15:18.902 --- 10.0.0.3 ping statistics --- 00:15:18.902 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:18.902 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:15:18.903 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:18.903 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:18.903 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.088 ms 00:15:18.903 00:15:18.903 --- 10.0.0.4 ping statistics --- 00:15:18.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:18.903 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:15:18.903 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:18.903 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:18.903 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:15:18.903 00:15:18.903 --- 10.0.0.1 ping statistics --- 00:15:18.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:18.903 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:15:18.903 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:18.903 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:18.903 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:15:18.903 00:15:18.903 --- 10.0.0.2 ping statistics --- 00:15:18.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:18.903 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:15:18.903 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:18.903 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@461 -- # return 0 00:15:18.903 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:18.903 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:18.903 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:18.903 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:18.903 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:18.903 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:18.903 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:18.903 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:15:18.903 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:15:18.903 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:18.903 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:18.903 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=74601 00:15:18.903 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:18.903 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:18.903 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 74601 00:15:18.903 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@835 -- # '[' -z 74601 ']' 00:15:18.903 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:18.903 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:18.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:18.903 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:18.903 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:18.903 10:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:18.903 [2024-11-15 10:59:05.672001] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:15:18.903 [2024-11-15 10:59:05.672077] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:19.162 [2024-11-15 10:59:05.819312] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:19.162 [2024-11-15 10:59:05.876110] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:19.162 [2024-11-15 10:59:05.876182] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:19.162 [2024-11-15 10:59:05.876194] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:19.162 [2024-11-15 10:59:05.876202] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:19.162 [2024-11-15 10:59:05.876210] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
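Once the four reactors below report started, the rest of the bring-up is driven over the JSON-RPC socket: the fio host script creates a TCP transport, a 64 MB malloc bdev with 512-byte blocks, and subsystem nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.3:4420, before handing over to fio. Pulled out of the xtrace that follows, the RPC sequence is:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc1
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420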
00:15:19.162 [2024-11-15 10:59:05.877579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:19.162 [2024-11-15 10:59:05.877618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:19.162 [2024-11-15 10:59:05.877691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:19.162 [2024-11-15 10:59:05.877703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:19.162 [2024-11-15 10:59:05.948430] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:19.420 10:59:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:19.420 10:59:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:15:19.420 10:59:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:19.679 [2024-11-15 10:59:06.330515] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:19.679 10:59:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:15:19.679 10:59:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:19.679 10:59:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:19.679 10:59:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:19.938 Malloc1 00:15:19.938 10:59:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:20.197 10:59:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:20.455 10:59:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:20.714 [2024-11-15 10:59:07.438218] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:20.714 10:59:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:15:20.972 10:59:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:15:20.972 10:59:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:15:20.972 10:59:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:15:20.972 10:59:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:20.972 10:59:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:20.972 10:59:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:20.972 10:59:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:20.972 10:59:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:15:20.972 10:59:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:20.972 10:59:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:20.972 10:59:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:15:20.972 10:59:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:20.972 10:59:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:20.972 10:59:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:15:20.972 10:59:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:15:20.972 10:59:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:20.972 10:59:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:20.972 10:59:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:15:20.972 10:59:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:20.972 10:59:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:15:20.972 10:59:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:15:20.972 10:59:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:20.972 10:59:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:15:21.231 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:15:21.231 fio-3.35 00:15:21.231 Starting 1 thread 00:15:23.764 00:15:23.764 test: (groupid=0, jobs=1): err= 0: pid=74671: Fri Nov 15 10:59:10 2024 00:15:23.764 read: IOPS=9055, BW=35.4MiB/s (37.1MB/s)(71.0MiB/2007msec) 00:15:23.764 slat (nsec): min=1730, max=321431, avg=2455.28, stdev=3472.90 00:15:23.764 clat (usec): min=2598, max=12632, avg=7361.33, stdev=604.70 00:15:23.764 lat (usec): min=2643, max=12634, avg=7363.78, stdev=604.52 00:15:23.764 clat percentiles (usec): 00:15:23.764 | 1.00th=[ 6194], 5.00th=[ 6521], 10.00th=[ 6652], 20.00th=[ 6915], 00:15:23.764 | 30.00th=[ 7046], 40.00th=[ 7177], 50.00th=[ 7308], 60.00th=[ 7439], 00:15:23.764 | 70.00th=[ 7635], 80.00th=[ 7832], 90.00th=[ 8094], 95.00th=[ 8455], 00:15:23.764 | 99.00th=[ 8979], 99.50th=[ 9241], 99.90th=[10814], 99.95th=[11600], 00:15:23.764 | 99.99th=[12518] 00:15:23.764 bw ( KiB/s): min=35048, max=37056, per=99.94%, avg=36202.00, stdev=896.91, samples=4 00:15:23.764 iops : min= 8762, max= 9264, avg=9050.50, stdev=224.23, samples=4 00:15:23.764 write: IOPS=9067, BW=35.4MiB/s (37.1MB/s)(71.1MiB/2007msec); 0 zone resets 00:15:23.764 slat (nsec): min=1804, max=257224, avg=2525.00, stdev=2701.58 00:15:23.764 clat (usec): min=2415, max=12532, avg=6718.39, stdev=548.29 00:15:23.764 lat (usec): min=2429, max=12534, avg=6720.92, stdev=548.25 00:15:23.764 
clat percentiles (usec): 00:15:23.764 | 1.00th=[ 5669], 5.00th=[ 5932], 10.00th=[ 6128], 20.00th=[ 6325], 00:15:23.764 | 30.00th=[ 6456], 40.00th=[ 6587], 50.00th=[ 6652], 60.00th=[ 6783], 00:15:23.764 | 70.00th=[ 6915], 80.00th=[ 7111], 90.00th=[ 7373], 95.00th=[ 7635], 00:15:23.764 | 99.00th=[ 8291], 99.50th=[ 8455], 99.90th=[ 9896], 99.95th=[10814], 00:15:23.764 | 99.99th=[12518] 00:15:23.764 bw ( KiB/s): min=35072, max=36928, per=100.00%, avg=36290.00, stdev=844.99, samples=4 00:15:23.764 iops : min= 8768, max= 9232, avg=9072.50, stdev=211.25, samples=4 00:15:23.764 lat (msec) : 4=0.07%, 10=99.80%, 20=0.12% 00:15:23.764 cpu : usr=71.83%, sys=21.49%, ctx=20, majf=0, minf=7 00:15:23.764 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:15:23.764 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:23.764 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:23.764 issued rwts: total=18175,18198,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:23.764 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:23.764 00:15:23.764 Run status group 0 (all jobs): 00:15:23.764 READ: bw=35.4MiB/s (37.1MB/s), 35.4MiB/s-35.4MiB/s (37.1MB/s-37.1MB/s), io=71.0MiB (74.4MB), run=2007-2007msec 00:15:23.764 WRITE: bw=35.4MiB/s (37.1MB/s), 35.4MiB/s-35.4MiB/s (37.1MB/s-37.1MB/s), io=71.1MiB (74.5MB), run=2007-2007msec 00:15:23.764 10:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:15:23.764 10:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:15:23.764 10:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:23.764 10:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:23.764 10:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:23.764 10:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:23.764 10:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:15:23.764 10:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:23.764 10:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:23.764 10:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:23.764 10:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:15:23.764 10:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:23.764 10:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:15:23.764 10:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:15:23.764 10:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:23.764 10:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:23.764 10:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:15:23.764 10:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:23.764 10:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:15:23.764 10:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:15:23.764 10:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:23.764 10:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:15:23.764 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:15:23.764 fio-3.35 00:15:23.764 Starting 1 thread 00:15:26.298 00:15:26.298 test: (groupid=0, jobs=1): err= 0: pid=74724: Fri Nov 15 10:59:12 2024 00:15:26.298 read: IOPS=8183, BW=128MiB/s (134MB/s)(257MiB/2008msec) 00:15:26.298 slat (usec): min=2, max=104, avg= 3.68, stdev= 2.65 00:15:26.298 clat (usec): min=2076, max=17557, avg=8661.28, stdev=2423.54 00:15:26.298 lat (usec): min=2079, max=17560, avg=8664.96, stdev=2423.60 00:15:26.298 clat percentiles (usec): 00:15:26.298 | 1.00th=[ 4293], 5.00th=[ 5014], 10.00th=[ 5604], 20.00th=[ 6587], 00:15:26.298 | 30.00th=[ 7242], 40.00th=[ 7832], 50.00th=[ 8455], 60.00th=[ 9110], 00:15:26.298 | 70.00th=[ 9896], 80.00th=[10552], 90.00th=[11863], 95.00th=[13042], 00:15:26.298 | 99.00th=[15401], 99.50th=[16057], 99.90th=[16909], 99.95th=[17171], 00:15:26.298 | 99.99th=[17433] 00:15:26.298 bw ( KiB/s): min=62016, max=73504, per=51.17%, avg=67008.00, stdev=5266.89, samples=4 00:15:26.298 iops : min= 3876, max= 4594, avg=4188.00, stdev=329.18, samples=4 00:15:26.298 write: IOPS=4730, BW=73.9MiB/s (77.5MB/s)(137MiB/1849msec); 0 zone resets 00:15:26.298 slat (usec): min=29, max=482, avg=36.92, stdev=11.81 00:15:26.298 clat (usec): min=1878, max=20335, avg=12444.59, stdev=2338.44 00:15:26.298 lat (usec): min=1911, max=20401, avg=12481.51, stdev=2340.60 00:15:26.298 clat percentiles (usec): 00:15:26.298 | 1.00th=[ 8094], 5.00th=[ 8979], 10.00th=[ 9634], 20.00th=[10421], 00:15:26.298 | 30.00th=[10945], 40.00th=[11469], 50.00th=[12256], 60.00th=[12911], 00:15:26.298 | 70.00th=[13698], 80.00th=[14484], 90.00th=[15664], 95.00th=[16581], 00:15:26.298 | 99.00th=[18220], 99.50th=[18744], 99.90th=[19792], 99.95th=[20055], 00:15:26.298 | 99.99th=[20317] 00:15:26.298 bw ( KiB/s): min=64384, max=76192, per=91.86%, avg=69520.00, stdev=5636.27, samples=4 00:15:26.298 iops : min= 4024, max= 4762, avg=4345.00, stdev=352.27, samples=4 00:15:26.298 lat (msec) : 2=0.01%, 4=0.31%, 10=51.93%, 20=47.73%, 50=0.02% 00:15:26.298 cpu : usr=79.58%, sys=15.79%, ctx=16, majf=0, minf=12 00:15:26.298 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:15:26.298 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:26.298 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:26.298 issued rwts: total=16433,8746,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:26.298 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:26.298 00:15:26.298 Run status group 0 (all jobs): 
00:15:26.298 READ: bw=128MiB/s (134MB/s), 128MiB/s-128MiB/s (134MB/s-134MB/s), io=257MiB (269MB), run=2008-2008msec 00:15:26.298 WRITE: bw=73.9MiB/s (77.5MB/s), 73.9MiB/s-73.9MiB/s (77.5MB/s-77.5MB/s), io=137MiB (143MB), run=1849-1849msec 00:15:26.298 10:59:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:26.298 10:59:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:15:26.298 10:59:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:15:26.298 10:59:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:15:26.298 10:59:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:15:26.298 10:59:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:26.298 10:59:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:15:26.298 10:59:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:26.298 10:59:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:15:26.298 10:59:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:26.298 10:59:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:26.298 rmmod nvme_tcp 00:15:26.298 rmmod nvme_fabrics 00:15:26.298 rmmod nvme_keyring 00:15:26.298 10:59:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:26.298 10:59:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:15:26.298 10:59:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:15:26.298 10:59:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 74601 ']' 00:15:26.298 10:59:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 74601 00:15:26.298 10:59:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 74601 ']' 00:15:26.298 10:59:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 74601 00:15:26.298 10:59:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:15:26.298 10:59:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:26.298 10:59:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74601 00:15:26.559 10:59:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:26.559 10:59:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:26.559 killing process with pid 74601 00:15:26.559 10:59:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74601' 00:15:26.559 10:59:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 74601 00:15:26.559 10:59:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 74601 00:15:26.844 10:59:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:26.844 10:59:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:26.844 10:59:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:26.844 10:59:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:15:26.844 10:59:13 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:15:26.844 10:59:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:26.844 10:59:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:15:26.844 10:59:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:26.844 10:59:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:26.844 10:59:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:26.844 10:59:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:26.844 10:59:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:26.844 10:59:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:26.844 10:59:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:26.844 10:59:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:26.844 10:59:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:26.844 10:59:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:26.844 10:59:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:26.844 10:59:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:26.844 10:59:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:26.844 10:59:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:26.844 10:59:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:26.844 10:59:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:26.844 10:59:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:26.844 10:59:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:26.844 10:59:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:26.844 10:59:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@300 -- # return 0 00:15:26.844 00:15:26.844 real 0m8.705s 00:15:26.844 user 0m34.469s 00:15:26.844 sys 0m2.447s 00:15:26.844 10:59:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:26.844 10:59:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:26.844 ************************************ 00:15:26.844 END TEST nvmf_fio_host 00:15:26.844 ************************************ 00:15:27.104 10:59:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:15:27.104 10:59:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:27.104 10:59:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:27.104 10:59:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:27.104 ************************************ 00:15:27.104 START TEST nvmf_failover 
00:15:27.104 ************************************ 00:15:27.104 10:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:15:27.104 * Looking for test storage... 00:15:27.104 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:27.104 10:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:27.104 10:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:15:27.104 10:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:27.104 10:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:27.104 10:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:27.104 10:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:27.104 10:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:27.104 10:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:15:27.104 10:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:15:27.104 10:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:15:27.104 10:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:15:27.104 10:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:15:27.104 10:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:15:27.104 10:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:15:27.104 10:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:27.104 10:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:15:27.104 10:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:15:27.104 10:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:27.104 10:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:27.104 10:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:15:27.104 10:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:15:27.104 10:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:27.104 10:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:15:27.104 10:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:15:27.104 10:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:15:27.104 10:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:15:27.105 10:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:27.105 10:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:15:27.105 10:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:15:27.105 10:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:27.105 10:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:27.105 10:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:15:27.105 10:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:27.105 10:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:27.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:27.105 --rc genhtml_branch_coverage=1 00:15:27.105 --rc genhtml_function_coverage=1 00:15:27.105 --rc genhtml_legend=1 00:15:27.105 --rc geninfo_all_blocks=1 00:15:27.105 --rc geninfo_unexecuted_blocks=1 00:15:27.105 00:15:27.105 ' 00:15:27.105 10:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:27.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:27.105 --rc genhtml_branch_coverage=1 00:15:27.105 --rc genhtml_function_coverage=1 00:15:27.105 --rc genhtml_legend=1 00:15:27.105 --rc geninfo_all_blocks=1 00:15:27.105 --rc geninfo_unexecuted_blocks=1 00:15:27.105 00:15:27.105 ' 00:15:27.105 10:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:27.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:27.105 --rc genhtml_branch_coverage=1 00:15:27.105 --rc genhtml_function_coverage=1 00:15:27.105 --rc genhtml_legend=1 00:15:27.105 --rc geninfo_all_blocks=1 00:15:27.105 --rc geninfo_unexecuted_blocks=1 00:15:27.105 00:15:27.105 ' 00:15:27.105 10:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:27.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:27.105 --rc genhtml_branch_coverage=1 00:15:27.105 --rc genhtml_function_coverage=1 00:15:27.105 --rc genhtml_legend=1 00:15:27.105 --rc geninfo_all_blocks=1 00:15:27.105 --rc geninfo_unexecuted_blocks=1 00:15:27.105 00:15:27.105 ' 00:15:27.105 10:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:27.105 10:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:15:27.105 10:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:27.105 10:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:15:27.105 10:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:27.105 10:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:27.105 10:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:27.105 10:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:27.105 10:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:27.105 10:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:27.105 10:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:27.105 10:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:27.105 10:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:15:27.105 10:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:15:27.105 10:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:27.105 10:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:27.105 10:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:27.105 10:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:27.105 10:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:27.105 10:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:15:27.105 10:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:27.105 10:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:27.105 10:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:27.105 10:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.105 10:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.105 
10:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.105 10:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:15:27.105 10:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.105 10:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:15:27.105 10:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:27.105 10:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:27.105 10:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:27.105 10:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:27.105 10:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:27.105 10:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:27.105 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:27.105 10:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:27.105 10:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:27.105 10:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:27.105 10:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:27.105 10:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:27.105 10:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:27.105 10:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:27.365 10:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:15:27.365 10:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:27.365 10:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:27.365 10:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:27.365 10:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 
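For readers following the trace, the nvmf_veth_init run that comes next builds a self-contained virtual topology for the TCP transport: two initiator veth pairs on the host (10.0.0.1 and 10.0.0.2), two target veth pairs moved into the nvmf_tgt_ns_spdk namespace (10.0.0.3 and 10.0.0.4), and a bridge joining the host-side peers. A condensed sketch of the same commands, taken from the traced calls below with names and addresses unchanged:

# create the target namespace and the four veth pairs
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
# target-side interfaces live inside the namespace
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
# address plan: initiators .1/.2 on the host, targets .3/.4 in the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
# bridge the host-side peers so all four endpoints can reach each other
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
    ip link set "$dev" master nvmf_br
done

The "Cannot find device" messages that follow are expected: the script first tears down any leftover interfaces from a previous run before recreating them.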
00:15:27.365 10:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:27.365 10:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:27.365 10:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:27.365 10:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:27.365 10:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:27.365 10:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:27.365 10:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:27.365 10:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:27.365 10:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:27.365 10:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:27.365 10:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:27.365 10:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:27.365 10:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:27.365 10:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:27.365 10:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:27.365 10:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:27.365 10:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:27.365 10:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:27.365 10:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:27.365 10:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:27.365 10:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:27.365 10:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:27.365 10:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:27.365 10:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:27.365 10:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:27.365 10:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:27.365 10:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:27.365 Cannot find device "nvmf_init_br" 00:15:27.365 10:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # true 00:15:27.365 10:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:27.365 Cannot find device "nvmf_init_br2" 00:15:27.365 10:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # true 00:15:27.365 10:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 
00:15:27.365 Cannot find device "nvmf_tgt_br" 00:15:27.365 10:59:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # true 00:15:27.365 10:59:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:27.365 Cannot find device "nvmf_tgt_br2" 00:15:27.365 10:59:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # true 00:15:27.365 10:59:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:27.365 Cannot find device "nvmf_init_br" 00:15:27.365 10:59:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # true 00:15:27.365 10:59:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:27.365 Cannot find device "nvmf_init_br2" 00:15:27.365 10:59:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # true 00:15:27.365 10:59:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:27.365 Cannot find device "nvmf_tgt_br" 00:15:27.365 10:59:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # true 00:15:27.365 10:59:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:27.365 Cannot find device "nvmf_tgt_br2" 00:15:27.365 10:59:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # true 00:15:27.365 10:59:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:27.365 Cannot find device "nvmf_br" 00:15:27.365 10:59:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # true 00:15:27.365 10:59:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:27.365 Cannot find device "nvmf_init_if" 00:15:27.365 10:59:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # true 00:15:27.365 10:59:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:27.365 Cannot find device "nvmf_init_if2" 00:15:27.365 10:59:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # true 00:15:27.365 10:59:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:27.365 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:27.365 10:59:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # true 00:15:27.365 10:59:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:27.365 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:27.365 10:59:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # true 00:15:27.365 10:59:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:27.365 10:59:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:27.365 10:59:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:27.365 10:59:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:27.365 10:59:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:27.365 10:59:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:27.365 
10:59:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:27.365 10:59:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:27.365 10:59:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:27.366 10:59:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:27.366 10:59:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:27.366 10:59:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:27.366 10:59:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:27.366 10:59:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:27.366 10:59:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:27.366 10:59:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:27.366 10:59:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:27.366 10:59:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:27.366 10:59:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:27.625 10:59:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:27.625 10:59:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:27.625 10:59:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:27.625 10:59:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:27.625 10:59:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:27.625 10:59:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:27.625 10:59:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:27.625 10:59:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:27.625 10:59:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:27.625 10:59:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:27.625 10:59:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:27.625 10:59:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:27.625 10:59:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j 
ACCEPT' 00:15:27.625 10:59:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:27.625 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:27.625 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:15:27.625 00:15:27.625 --- 10.0.0.3 ping statistics --- 00:15:27.625 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:27.625 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:15:27.625 10:59:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:27.625 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:27.625 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.098 ms 00:15:27.625 00:15:27.625 --- 10.0.0.4 ping statistics --- 00:15:27.625 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:27.625 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:15:27.625 10:59:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:27.625 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:27.625 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:15:27.625 00:15:27.625 --- 10.0.0.1 ping statistics --- 00:15:27.625 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:27.625 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:15:27.625 10:59:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:27.625 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:27.625 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:15:27.625 00:15:27.625 --- 10.0.0.2 ping statistics --- 00:15:27.625 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:27.625 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:15:27.625 10:59:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:27.625 10:59:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@461 -- # return 0 00:15:27.625 10:59:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:27.625 10:59:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:27.625 10:59:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:27.625 10:59:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:27.625 10:59:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:27.625 10:59:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:27.625 10:59:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:27.625 10:59:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:15:27.625 10:59:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:27.625 10:59:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:27.625 10:59:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:27.625 10:59:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=74984 00:15:27.625 10:59:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:27.625 10:59:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 74984 00:15:27.625 10:59:14 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 74984 ']' 00:15:27.625 10:59:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:27.625 10:59:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:27.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:27.625 10:59:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:27.625 10:59:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:27.625 10:59:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:27.625 [2024-11-15 10:59:14.413019] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:15:27.625 [2024-11-15 10:59:14.413103] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:27.885 [2024-11-15 10:59:14.563708] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:27.885 [2024-11-15 10:59:14.631254] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:27.885 [2024-11-15 10:59:14.631327] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:27.885 [2024-11-15 10:59:14.631343] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:27.885 [2024-11-15 10:59:14.631354] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:27.885 [2024-11-15 10:59:14.631364] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
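The ipts wrapper traced above tags every rule it inserts with an SPDK_NVMF comment; the teardown seen earlier in this log (iptables-save | grep -v SPDK_NVMF | iptables-restore) then strips exactly those rules and nothing else. A minimal sketch of that tag-and-strip pattern, consistent with the traced commands (the wrapper body here is an assumption, not copied from the script):

# wrapper: run iptables and record the rule text in a comment
ipts() {
    iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}
# setup: allow NVMe/TCP traffic to the target port and bridge forwarding
ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
# teardown: drop only the tagged rules, leaving the rest of the firewall intact
iptables-save | grep -v SPDK_NVMF | iptables-restore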
00:15:27.885 [2024-11-15 10:59:14.632789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:27.885 [2024-11-15 10:59:14.632925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:27.885 [2024-11-15 10:59:14.632932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:27.885 [2024-11-15 10:59:14.693750] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:28.142 10:59:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:28.142 10:59:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:15:28.142 10:59:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:28.142 10:59:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:28.142 10:59:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:28.142 10:59:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:28.142 10:59:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:28.400 [2024-11-15 10:59:15.032327] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:28.400 10:59:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:15:28.659 Malloc0 00:15:28.659 10:59:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:28.917 10:59:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:29.176 10:59:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:29.434 [2024-11-15 10:59:16.156751] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:29.435 10:59:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:15:29.693 [2024-11-15 10:59:16.392877] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:15:29.693 10:59:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:15:29.952 [2024-11-15 10:59:16.625234] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:15:29.952 10:59:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:15:29.952 10:59:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=75040 00:15:29.952 10:59:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 
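The RPC sequence just traced is the entire target-side setup for the failover test. Condensed into one place (same script path, addresses, and NQN as above; the loop is only for brevity), it amounts to the following, after which the bdevperf initiator attached next connects to two of the listeners with -x failover:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# TCP transport with 8192-byte in-capsule data, plus one 64 MiB / 512 B malloc bdev
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
# one subsystem exporting that bdev...
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
# ...reachable on three TCP ports, so listeners can be removed and re-added one at a time
for port in 4420 4421 4422; do
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s "$port"
done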
00:15:29.952 10:59:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 75040 /var/tmp/bdevperf.sock 00:15:29.952 10:59:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 75040 ']' 00:15:29.952 10:59:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:29.952 10:59:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:29.952 10:59:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:29.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:29.952 10:59:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:29.952 10:59:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:30.211 10:59:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:30.211 10:59:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:15:30.211 10:59:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:15:30.469 NVMe0n1 00:15:30.470 10:59:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:15:31.037 00:15:31.037 10:59:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=75056 00:15:31.037 10:59:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:31.037 10:59:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:15:31.973 10:59:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:32.232 10:59:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:15:35.521 10:59:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:15:35.521 00:15:35.521 10:59:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:15:35.780 10:59:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:15:39.070 10:59:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:39.070 [2024-11-15 10:59:25.884059] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:39.070 10:59:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:15:40.064 10:59:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:15:40.631 10:59:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 75056 00:15:47.225 { 00:15:47.225 "results": [ 00:15:47.225 { 00:15:47.225 "job": "NVMe0n1", 00:15:47.225 "core_mask": "0x1", 00:15:47.225 "workload": "verify", 00:15:47.225 "status": "finished", 00:15:47.225 "verify_range": { 00:15:47.225 "start": 0, 00:15:47.225 "length": 16384 00:15:47.226 }, 00:15:47.226 "queue_depth": 128, 00:15:47.226 "io_size": 4096, 00:15:47.226 "runtime": 15.010323, 00:15:47.226 "iops": 9442.435049532245, 00:15:47.226 "mibps": 36.88451191223533, 00:15:47.226 "io_failed": 3637, 00:15:47.226 "io_timeout": 0, 00:15:47.226 "avg_latency_us": 13188.49445029989, 00:15:47.226 "min_latency_us": 595.7818181818182, 00:15:47.226 "max_latency_us": 25618.618181818183 00:15:47.226 } 00:15:47.226 ], 00:15:47.226 "core_count": 1 00:15:47.226 } 00:15:47.226 10:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 75040 00:15:47.226 10:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 75040 ']' 00:15:47.226 10:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 75040 00:15:47.226 10:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:15:47.226 10:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:47.226 10:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75040 00:15:47.226 killing process with pid 75040 00:15:47.226 10:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:47.226 10:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:47.226 10:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75040' 00:15:47.226 10:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 75040 00:15:47.226 10:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 75040 00:15:47.226 10:59:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:47.226 [2024-11-15 10:59:16.681951] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:15:47.226 [2024-11-15 10:59:16.682045] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75040 ] 00:15:47.226 [2024-11-15 10:59:16.831271] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:47.226 [2024-11-15 10:59:16.891752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:47.226 [2024-11-15 10:59:16.951391] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:47.226 Running I/O for 15 seconds... 
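The results block printed above is worth a quick sanity check: with a 4096-byte I/O size, roughly 9442 IOPS over the ~15 s run reproduces the reported 36.88 MiB/s. The io_failed count plausibly reflects I/O aborted while listeners were removed and re-added during the failover steps; that reading is an inference from the trace, not something bdevperf states. A one-liner to verify the throughput arithmetic:

# throughput = iops * io_size / 2^20
iops=9442.435049532245; io_size=4096
awk -v i="$iops" -v s="$io_size" 'BEGIN { printf "%.2f MiB/s\n", i * s / 1048576 }'
# prints 36.88 MiB/s, matching the "mibps" field in the report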
00:15:47.226 7573.00 IOPS, 29.58 MiB/s [2024-11-15T10:59:34.087Z] [2024-11-15 10:59:18.948798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:69784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.226 [2024-11-15 10:59:18.948855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.226 [2024-11-15 10:59:18.948885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:69864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.226 [2024-11-15 10:59:18.948902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.226 [2024-11-15 10:59:18.948918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:69872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.226 [2024-11-15 10:59:18.948931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.226 [2024-11-15 10:59:18.948945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:69880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.226 [2024-11-15 10:59:18.948958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.226 [2024-11-15 10:59:18.948972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:69888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.226 [2024-11-15 10:59:18.948985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.226 [2024-11-15 10:59:18.948999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:69896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.226 [2024-11-15 10:59:18.949012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.226 [2024-11-15 10:59:18.949026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:69904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.226 [2024-11-15 10:59:18.949039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.226 [2024-11-15 10:59:18.949053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:69912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.226 [2024-11-15 10:59:18.949066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.226 [2024-11-15 10:59:18.949080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:69920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.226 [2024-11-15 10:59:18.949093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.226 [2024-11-15 10:59:18.949107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:69792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.226 [2024-11-15 10:59:18.949120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:15:47.226 [2024-11-15 10:59:18.949134] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f1afc0 is same with the state(6) to be set 00:15:47.226 [2024-11-15 10:59:18.949177] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.226 [2024-11-15 10:59:18.949188] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.226 [2024-11-15 10:59:18.949199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:69800 len:8 PRP1 0x0 PRP2 0x0 00:15:47.226 [2024-11-15 10:59:18.949211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.226 [2024-11-15 10:59:18.949225] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.226 [2024-11-15 10:59:18.949235] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.226 [2024-11-15 10:59:18.949245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69928 len:8 PRP1 0x0 PRP2 0x0 00:15:47.226 [2024-11-15 10:59:18.949257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.226 [2024-11-15 10:59:18.949269] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.226 [2024-11-15 10:59:18.949278] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.226 [2024-11-15 10:59:18.949289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69936 len:8 PRP1 0x0 PRP2 0x0 00:15:47.226 [2024-11-15 10:59:18.949301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.226 [2024-11-15 10:59:18.949337] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.226 [2024-11-15 10:59:18.949347] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.226 [2024-11-15 10:59:18.949357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69944 len:8 PRP1 0x0 PRP2 0x0 00:15:47.226 [2024-11-15 10:59:18.949370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.226 [2024-11-15 10:59:18.949383] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.226 [2024-11-15 10:59:18.949392] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.226 [2024-11-15 10:59:18.949401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69952 len:8 PRP1 0x0 PRP2 0x0 00:15:47.226 [2024-11-15 10:59:18.949414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.226 [2024-11-15 10:59:18.949426] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.226 [2024-11-15 10:59:18.949436] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.226 [2024-11-15 10:59:18.949445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69960 len:8 PRP1 0x0 PRP2 0x0 00:15:47.226 [2024-11-15 10:59:18.949458] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.226 [2024-11-15 10:59:18.949470] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.226 [2024-11-15 10:59:18.949479] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.226 [2024-11-15 10:59:18.949489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69968 len:8 PRP1 0x0 PRP2 0x0 00:15:47.226 [2024-11-15 10:59:18.949502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.226 [2024-11-15 10:59:18.949515] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.226 [2024-11-15 10:59:18.949524] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.226 [2024-11-15 10:59:18.949534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69976 len:8 PRP1 0x0 PRP2 0x0 00:15:47.226 [2024-11-15 10:59:18.949570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.226 [2024-11-15 10:59:18.949586] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.226 [2024-11-15 10:59:18.949596] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.226 [2024-11-15 10:59:18.949605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69984 len:8 PRP1 0x0 PRP2 0x0 00:15:47.226 [2024-11-15 10:59:18.949618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.226 [2024-11-15 10:59:18.949631] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.226 [2024-11-15 10:59:18.949640] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.226 [2024-11-15 10:59:18.949650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69992 len:8 PRP1 0x0 PRP2 0x0 00:15:47.226 [2024-11-15 10:59:18.949663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.226 [2024-11-15 10:59:18.949676] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.226 [2024-11-15 10:59:18.949685] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.226 [2024-11-15 10:59:18.949695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70000 len:8 PRP1 0x0 PRP2 0x0 00:15:47.226 [2024-11-15 10:59:18.949707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.227 [2024-11-15 10:59:18.949739] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.227 [2024-11-15 10:59:18.949750] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.227 [2024-11-15 10:59:18.949759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70008 len:8 PRP1 0x0 PRP2 0x0 00:15:47.227 [2024-11-15 10:59:18.949771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.227 [2024-11-15 10:59:18.949784] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.227 [2024-11-15 10:59:18.949793] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.227 [2024-11-15 10:59:18.949804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70016 len:8 PRP1 0x0 PRP2 0x0 00:15:47.227 [2024-11-15 10:59:18.949816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.227 [2024-11-15 10:59:18.949828] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.227 [2024-11-15 10:59:18.949837] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.227 [2024-11-15 10:59:18.949847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70024 len:8 PRP1 0x0 PRP2 0x0 00:15:47.227 [2024-11-15 10:59:18.949859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.227 [2024-11-15 10:59:18.949872] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.227 [2024-11-15 10:59:18.949881] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.227 [2024-11-15 10:59:18.949891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70032 len:8 PRP1 0x0 PRP2 0x0 00:15:47.227 [2024-11-15 10:59:18.949903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.227 [2024-11-15 10:59:18.949916] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.227 [2024-11-15 10:59:18.949925] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.227 [2024-11-15 10:59:18.949941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70040 len:8 PRP1 0x0 PRP2 0x0 00:15:47.227 [2024-11-15 10:59:18.949954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.227 [2024-11-15 10:59:18.949967] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.227 [2024-11-15 10:59:18.949976] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.227 [2024-11-15 10:59:18.949986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70048 len:8 PRP1 0x0 PRP2 0x0 00:15:47.227 [2024-11-15 10:59:18.949998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.227 [2024-11-15 10:59:18.950010] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.227 [2024-11-15 10:59:18.950019] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.227 [2024-11-15 10:59:18.950028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70056 len:8 PRP1 0x0 PRP2 0x0 00:15:47.227 [2024-11-15 10:59:18.950040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:15:47.227 [2024-11-15 10:59:18.950053] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.227 [2024-11-15 10:59:18.950062] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.227 [2024-11-15 10:59:18.950071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70064 len:8 PRP1 0x0 PRP2 0x0 00:15:47.227 [2024-11-15 10:59:18.950083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.227 [2024-11-15 10:59:18.950102] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.227 [2024-11-15 10:59:18.950112] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.227 [2024-11-15 10:59:18.950121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70072 len:8 PRP1 0x0 PRP2 0x0 00:15:47.227 [2024-11-15 10:59:18.950133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.227 [2024-11-15 10:59:18.950145] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.227 [2024-11-15 10:59:18.950154] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.227 [2024-11-15 10:59:18.950164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70080 len:8 PRP1 0x0 PRP2 0x0 00:15:47.227 [2024-11-15 10:59:18.950176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.227 [2024-11-15 10:59:18.950188] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.227 [2024-11-15 10:59:18.950197] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.227 [2024-11-15 10:59:18.950206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70088 len:8 PRP1 0x0 PRP2 0x0 00:15:47.227 [2024-11-15 10:59:18.950218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.227 [2024-11-15 10:59:18.950231] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.227 [2024-11-15 10:59:18.950241] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.227 [2024-11-15 10:59:18.950251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70096 len:8 PRP1 0x0 PRP2 0x0 00:15:47.227 [2024-11-15 10:59:18.950263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.227 [2024-11-15 10:59:18.950281] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.227 [2024-11-15 10:59:18.950291] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.227 [2024-11-15 10:59:18.950301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70104 len:8 PRP1 0x0 PRP2 0x0 00:15:47.227 [2024-11-15 10:59:18.950313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.227 [2024-11-15 
10:59:18.950325] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.227 [2024-11-15 10:59:18.950334] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.227 [2024-11-15 10:59:18.950344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70112 len:8 PRP1 0x0 PRP2 0x0 00:15:47.227 [2024-11-15 10:59:18.950356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.227 [2024-11-15 10:59:18.950369] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.227 [2024-11-15 10:59:18.950378] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.227 [2024-11-15 10:59:18.950387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70120 len:8 PRP1 0x0 PRP2 0x0 00:15:47.227 [2024-11-15 10:59:18.950399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.227 [2024-11-15 10:59:18.950412] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.227 [2024-11-15 10:59:18.950421] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.227 [2024-11-15 10:59:18.950430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70128 len:8 PRP1 0x0 PRP2 0x0 00:15:47.227 [2024-11-15 10:59:18.950442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.227 [2024-11-15 10:59:18.950459] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.227 [2024-11-15 10:59:18.950468] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.227 [2024-11-15 10:59:18.950478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70136 len:8 PRP1 0x0 PRP2 0x0 00:15:47.227 [2024-11-15 10:59:18.950489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.227 [2024-11-15 10:59:18.950502] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.227 [2024-11-15 10:59:18.950511] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.227 [2024-11-15 10:59:18.950520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70144 len:8 PRP1 0x0 PRP2 0x0 00:15:47.227 [2024-11-15 10:59:18.950533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.227 [2024-11-15 10:59:18.950567] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.227 [2024-11-15 10:59:18.950577] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.227 [2024-11-15 10:59:18.950586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70152 len:8 PRP1 0x0 PRP2 0x0 00:15:47.227 [2024-11-15 10:59:18.950599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.227 [2024-11-15 10:59:18.950611] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.227 [2024-11-15 10:59:18.950620] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.227 [2024-11-15 10:59:18.950630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70160 len:8 PRP1 0x0 PRP2 0x0 00:15:47.227 [2024-11-15 10:59:18.950642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.227 [2024-11-15 10:59:18.950661] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.227 [2024-11-15 10:59:18.950671] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.227 [2024-11-15 10:59:18.950682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70168 len:8 PRP1 0x0 PRP2 0x0 00:15:47.227 [2024-11-15 10:59:18.950694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.227 [2024-11-15 10:59:18.950706] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.227 [2024-11-15 10:59:18.950716] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.227 [2024-11-15 10:59:18.950725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70176 len:8 PRP1 0x0 PRP2 0x0 00:15:47.227 [2024-11-15 10:59:18.950737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.227 [2024-11-15 10:59:18.950749] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.227 [2024-11-15 10:59:18.950758] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.227 [2024-11-15 10:59:18.950768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70184 len:8 PRP1 0x0 PRP2 0x0 00:15:47.227 [2024-11-15 10:59:18.950780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.227 [2024-11-15 10:59:18.950792] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.228 [2024-11-15 10:59:18.950801] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.228 [2024-11-15 10:59:18.950811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70192 len:8 PRP1 0x0 PRP2 0x0 00:15:47.228 [2024-11-15 10:59:18.950823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.228 [2024-11-15 10:59:18.950840] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.228 [2024-11-15 10:59:18.950850] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.228 [2024-11-15 10:59:18.950860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70200 len:8 PRP1 0x0 PRP2 0x0 00:15:47.228 [2024-11-15 10:59:18.950872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.228 [2024-11-15 10:59:18.950884] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:15:47.228 [2024-11-15 10:59:18.950893] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.228 [2024-11-15 10:59:18.950903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70208 len:8 PRP1 0x0 PRP2 0x0 00:15:47.228 [2024-11-15 10:59:18.950914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.228 [2024-11-15 10:59:18.950939] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.228 [2024-11-15 10:59:18.950948] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.228 [2024-11-15 10:59:18.950958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70216 len:8 PRP1 0x0 PRP2 0x0 00:15:47.228 [2024-11-15 10:59:18.950970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.228 [2024-11-15 10:59:18.950982] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.228 [2024-11-15 10:59:18.950991] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.228 [2024-11-15 10:59:18.951006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70224 len:8 PRP1 0x0 PRP2 0x0 00:15:47.228 [2024-11-15 10:59:18.951020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.228 [2024-11-15 10:59:18.951032] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.228 [2024-11-15 10:59:18.951041] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.228 [2024-11-15 10:59:18.951058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70232 len:8 PRP1 0x0 PRP2 0x0 00:15:47.228 [2024-11-15 10:59:18.951071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.228 [2024-11-15 10:59:18.951084] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.228 [2024-11-15 10:59:18.951093] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.228 [2024-11-15 10:59:18.951102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70240 len:8 PRP1 0x0 PRP2 0x0 00:15:47.228 [2024-11-15 10:59:18.951114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.228 [2024-11-15 10:59:18.951127] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.228 [2024-11-15 10:59:18.951136] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.228 [2024-11-15 10:59:18.951145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70248 len:8 PRP1 0x0 PRP2 0x0 00:15:47.228 [2024-11-15 10:59:18.951158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.228 [2024-11-15 10:59:18.951170] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.228 [2024-11-15 10:59:18.951179] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.228 [2024-11-15 10:59:18.951188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70256 len:8 PRP1 0x0 PRP2 0x0 00:15:47.228 [2024-11-15 10:59:18.951201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.228 [2024-11-15 10:59:18.951217] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.228 [2024-11-15 10:59:18.951227] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.228 [2024-11-15 10:59:18.951236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70264 len:8 PRP1 0x0 PRP2 0x0 00:15:47.228 [2024-11-15 10:59:18.951248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.228 [2024-11-15 10:59:18.951261] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.228 [2024-11-15 10:59:18.951270] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.228 [2024-11-15 10:59:18.951279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70272 len:8 PRP1 0x0 PRP2 0x0 00:15:47.228 [2024-11-15 10:59:18.951291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.228 [2024-11-15 10:59:18.951303] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.228 [2024-11-15 10:59:18.951312] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.228 [2024-11-15 10:59:18.951322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70280 len:8 PRP1 0x0 PRP2 0x0 00:15:47.228 [2024-11-15 10:59:18.951334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.228 [2024-11-15 10:59:18.951346] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.228 [2024-11-15 10:59:18.951361] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.228 [2024-11-15 10:59:18.951372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70288 len:8 PRP1 0x0 PRP2 0x0 00:15:47.228 [2024-11-15 10:59:18.951384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.228 [2024-11-15 10:59:18.951396] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.228 [2024-11-15 10:59:18.951406] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.228 [2024-11-15 10:59:18.951420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70296 len:8 PRP1 0x0 PRP2 0x0 00:15:47.228 [2024-11-15 10:59:18.951432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.228 [2024-11-15 10:59:18.951445] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.228 [2024-11-15 10:59:18.951454] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:15:47.228 [2024-11-15 10:59:18.951463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70304 len:8 PRP1 0x0 PRP2 0x0 00:15:47.228 [2024-11-15 10:59:18.951475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.228 [2024-11-15 10:59:18.951487] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.228 [2024-11-15 10:59:18.951496] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.228 [2024-11-15 10:59:18.951506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70312 len:8 PRP1 0x0 PRP2 0x0 00:15:47.228 [2024-11-15 10:59:18.951518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.228 [2024-11-15 10:59:18.951543] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.228 [2024-11-15 10:59:18.951554] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.228 [2024-11-15 10:59:18.951563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70320 len:8 PRP1 0x0 PRP2 0x0 00:15:47.228 [2024-11-15 10:59:18.951576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.228 [2024-11-15 10:59:18.951593] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.228 [2024-11-15 10:59:18.951603] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.228 [2024-11-15 10:59:18.951612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70328 len:8 PRP1 0x0 PRP2 0x0 00:15:47.228 [2024-11-15 10:59:18.951624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.228 [2024-11-15 10:59:18.951637] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.228 [2024-11-15 10:59:18.951646] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.228 [2024-11-15 10:59:18.951655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70336 len:8 PRP1 0x0 PRP2 0x0 00:15:47.228 [2024-11-15 10:59:18.951667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.228 [2024-11-15 10:59:18.951679] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.228 [2024-11-15 10:59:18.951688] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.228 [2024-11-15 10:59:18.951697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70344 len:8 PRP1 0x0 PRP2 0x0 00:15:47.228 [2024-11-15 10:59:18.951709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.228 [2024-11-15 10:59:18.951728] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.228 [2024-11-15 10:59:18.951738] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.228 
[2024-11-15 10:59:18.951747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70352 len:8 PRP1 0x0 PRP2 0x0 00:15:47.228 [2024-11-15 10:59:18.951759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.228 [2024-11-15 10:59:18.951814] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.228 [2024-11-15 10:59:18.951824] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.228 [2024-11-15 10:59:18.951839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70360 len:8 PRP1 0x0 PRP2 0x0 00:15:47.228 [2024-11-15 10:59:18.951853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.228 [2024-11-15 10:59:18.951866] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.228 [2024-11-15 10:59:18.951876] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.228 [2024-11-15 10:59:18.951886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70368 len:8 PRP1 0x0 PRP2 0x0 00:15:47.228 [2024-11-15 10:59:18.951899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.228 [2024-11-15 10:59:18.951912] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.228 [2024-11-15 10:59:18.951921] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.228 [2024-11-15 10:59:18.951931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70376 len:8 PRP1 0x0 PRP2 0x0 00:15:47.229 [2024-11-15 10:59:18.951944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.229 [2024-11-15 10:59:18.951957] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.229 [2024-11-15 10:59:18.951966] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.229 [2024-11-15 10:59:18.951976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70384 len:8 PRP1 0x0 PRP2 0x0 00:15:47.229 [2024-11-15 10:59:18.951989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.229 [2024-11-15 10:59:18.952007] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.229 [2024-11-15 10:59:18.952017] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.229 [2024-11-15 10:59:18.952027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70392 len:8 PRP1 0x0 PRP2 0x0 00:15:47.229 [2024-11-15 10:59:18.952040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.229 [2024-11-15 10:59:18.952053] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.229 [2024-11-15 10:59:18.952062] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.229 [2024-11-15 10:59:18.952072] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70400 len:8 PRP1 0x0 PRP2 0x0 00:15:47.229 [2024-11-15 10:59:18.952085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.229 [2024-11-15 10:59:18.952097] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.229 [2024-11-15 10:59:18.952133] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.229 [2024-11-15 10:59:18.952142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70408 len:8 PRP1 0x0 PRP2 0x0 00:15:47.229 [2024-11-15 10:59:18.952175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.229 [2024-11-15 10:59:18.952188] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.229 [2024-11-15 10:59:18.952197] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.229 [2024-11-15 10:59:18.952207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70416 len:8 PRP1 0x0 PRP2 0x0 00:15:47.229 [2024-11-15 10:59:18.952219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.229 [2024-11-15 10:59:18.952231] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.229 [2024-11-15 10:59:18.952240] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.229 [2024-11-15 10:59:18.952254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70424 len:8 PRP1 0x0 PRP2 0x0 00:15:47.229 [2024-11-15 10:59:18.952266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.229 [2024-11-15 10:59:18.952279] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.229 [2024-11-15 10:59:18.952288] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.229 [2024-11-15 10:59:18.952297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70432 len:8 PRP1 0x0 PRP2 0x0 00:15:47.229 [2024-11-15 10:59:18.952309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.229 [2024-11-15 10:59:18.952321] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.229 [2024-11-15 10:59:18.952331] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.229 [2024-11-15 10:59:18.952340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70440 len:8 PRP1 0x0 PRP2 0x0 00:15:47.229 [2024-11-15 10:59:18.952352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.229 [2024-11-15 10:59:18.952364] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.229 [2024-11-15 10:59:18.952373] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.229 [2024-11-15 10:59:18.952382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:70448 len:8 PRP1 0x0 PRP2 0x0 00:15:47.229 [2024-11-15 10:59:18.952394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.229 [2024-11-15 10:59:18.952411] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.229 [2024-11-15 10:59:18.952420] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.229 [2024-11-15 10:59:18.952430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70456 len:8 PRP1 0x0 PRP2 0x0 00:15:47.229 [2024-11-15 10:59:18.952442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.229 [2024-11-15 10:59:18.952454] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.229 [2024-11-15 10:59:18.952463] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.229 [2024-11-15 10:59:18.952472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70464 len:8 PRP1 0x0 PRP2 0x0 00:15:47.229 [2024-11-15 10:59:18.952484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.229 [2024-11-15 10:59:18.952497] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.229 [2024-11-15 10:59:18.952506] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.229 [2024-11-15 10:59:18.952520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70472 len:8 PRP1 0x0 PRP2 0x0 00:15:47.229 [2024-11-15 10:59:18.952533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.229 [2024-11-15 10:59:18.952545] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.229 [2024-11-15 10:59:18.952554] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.229 [2024-11-15 10:59:18.952564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70480 len:8 PRP1 0x0 PRP2 0x0 00:15:47.229 [2024-11-15 10:59:18.952576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.229 [2024-11-15 10:59:18.952598] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.229 [2024-11-15 10:59:18.952610] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.229 [2024-11-15 10:59:18.952625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70488 len:8 PRP1 0x0 PRP2 0x0 00:15:47.229 [2024-11-15 10:59:18.952638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.229 [2024-11-15 10:59:18.952650] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.229 [2024-11-15 10:59:18.952659] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.229 [2024-11-15 10:59:18.952669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70496 len:8 PRP1 0x0 PRP2 0x0 
00:15:47.229 [2024-11-15 10:59:18.952681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.229 [2024-11-15 10:59:18.952694] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.229 [2024-11-15 10:59:18.952702] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.229 [2024-11-15 10:59:18.952712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70504 len:8 PRP1 0x0 PRP2 0x0 00:15:47.229 [2024-11-15 10:59:18.952724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.229 [2024-11-15 10:59:18.952736] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.229 [2024-11-15 10:59:18.952745] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.229 [2024-11-15 10:59:18.952755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70512 len:8 PRP1 0x0 PRP2 0x0 00:15:47.229 [2024-11-15 10:59:18.952767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.229 [2024-11-15 10:59:18.952784] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.229 [2024-11-15 10:59:18.962294] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.229 [2024-11-15 10:59:18.962354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70520 len:8 PRP1 0x0 PRP2 0x0 00:15:47.229 [2024-11-15 10:59:18.962387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.229 [2024-11-15 10:59:18.962422] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.229 [2024-11-15 10:59:18.962444] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.229 [2024-11-15 10:59:18.962465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70528 len:8 PRP1 0x0 PRP2 0x0 00:15:47.229 [2024-11-15 10:59:18.962503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.229 [2024-11-15 10:59:18.962592] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.229 [2024-11-15 10:59:18.962616] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.229 [2024-11-15 10:59:18.962639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70536 len:8 PRP1 0x0 PRP2 0x0 00:15:47.229 [2024-11-15 10:59:18.962665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.229 [2024-11-15 10:59:18.962701] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.229 [2024-11-15 10:59:18.962710] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.229 [2024-11-15 10:59:18.962719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70544 len:8 PRP1 0x0 PRP2 0x0 00:15:47.229 [2024-11-15 10:59:18.962732] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.229 [2024-11-15 10:59:18.962744] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.229 [2024-11-15 10:59:18.962753] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.229 [2024-11-15 10:59:18.962764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70552 len:8 PRP1 0x0 PRP2 0x0 00:15:47.229 [2024-11-15 10:59:18.962776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.229 [2024-11-15 10:59:18.962789] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.229 [2024-11-15 10:59:18.962798] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.229 [2024-11-15 10:59:18.962808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70560 len:8 PRP1 0x0 PRP2 0x0 00:15:47.229 [2024-11-15 10:59:18.962820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.229 [2024-11-15 10:59:18.962832] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.230 [2024-11-15 10:59:18.962841] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.230 [2024-11-15 10:59:18.962866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70568 len:8 PRP1 0x0 PRP2 0x0 00:15:47.230 [2024-11-15 10:59:18.962878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.230 [2024-11-15 10:59:18.962890] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.230 [2024-11-15 10:59:18.962899] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.230 [2024-11-15 10:59:18.962908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70576 len:8 PRP1 0x0 PRP2 0x0 00:15:47.230 [2024-11-15 10:59:18.962920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.230 [2024-11-15 10:59:18.962949] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.230 [2024-11-15 10:59:18.962958] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.230 [2024-11-15 10:59:18.962967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70584 len:8 PRP1 0x0 PRP2 0x0 00:15:47.230 [2024-11-15 10:59:18.962979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.230 [2024-11-15 10:59:18.962992] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.230 [2024-11-15 10:59:18.963001] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.230 [2024-11-15 10:59:18.963010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70592 len:8 PRP1 0x0 PRP2 0x0 00:15:47.230 [2024-11-15 10:59:18.963022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.230 [2024-11-15 10:59:18.963041] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.230 [2024-11-15 10:59:18.963051] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.230 [2024-11-15 10:59:18.963060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70600 len:8 PRP1 0x0 PRP2 0x0 00:15:47.230 [2024-11-15 10:59:18.963072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.230 [2024-11-15 10:59:18.963085] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.230 [2024-11-15 10:59:18.963094] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.230 [2024-11-15 10:59:18.963103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70608 len:8 PRP1 0x0 PRP2 0x0 00:15:47.230 [2024-11-15 10:59:18.963115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.230 [2024-11-15 10:59:18.963146] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.230 [2024-11-15 10:59:18.963155] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.230 [2024-11-15 10:59:18.963165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70616 len:8 PRP1 0x0 PRP2 0x0 00:15:47.230 [2024-11-15 10:59:18.963177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.230 [2024-11-15 10:59:18.963190] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.230 [2024-11-15 10:59:18.963199] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.230 [2024-11-15 10:59:18.963209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70624 len:8 PRP1 0x0 PRP2 0x0 00:15:47.230 [2024-11-15 10:59:18.963221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.230 [2024-11-15 10:59:18.963234] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.230 [2024-11-15 10:59:18.963254] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.230 [2024-11-15 10:59:18.963263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70632 len:8 PRP1 0x0 PRP2 0x0 00:15:47.230 [2024-11-15 10:59:18.963275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.230 [2024-11-15 10:59:18.963287] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.230 [2024-11-15 10:59:18.963296] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.230 [2024-11-15 10:59:18.963306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70640 len:8 PRP1 0x0 PRP2 0x0 00:15:47.230 [2024-11-15 10:59:18.963318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:15:47.230 [2024-11-15 10:59:18.963343] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.230 [2024-11-15 10:59:18.963352] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.230 [2024-11-15 10:59:18.963362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70648 len:8 PRP1 0x0 PRP2 0x0 00:15:47.230 [2024-11-15 10:59:18.963374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.230 [2024-11-15 10:59:18.963387] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.230 [2024-11-15 10:59:18.963396] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.230 [2024-11-15 10:59:18.963412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70656 len:8 PRP1 0x0 PRP2 0x0 00:15:47.230 [2024-11-15 10:59:18.963425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.230 [2024-11-15 10:59:18.963438] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.230 [2024-11-15 10:59:18.963448] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.230 [2024-11-15 10:59:18.963457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70664 len:8 PRP1 0x0 PRP2 0x0 00:15:47.230 [2024-11-15 10:59:18.963470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.230 [2024-11-15 10:59:18.963483] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.230 [2024-11-15 10:59:18.963491] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.230 [2024-11-15 10:59:18.963501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70672 len:8 PRP1 0x0 PRP2 0x0 00:15:47.230 [2024-11-15 10:59:18.963514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.230 [2024-11-15 10:59:18.963527] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.230 [2024-11-15 10:59:18.963536] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.230 [2024-11-15 10:59:18.963545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70680 len:8 PRP1 0x0 PRP2 0x0 00:15:47.230 [2024-11-15 10:59:18.963557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.230 [2024-11-15 10:59:18.963570] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.230 [2024-11-15 10:59:18.963589] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.230 [2024-11-15 10:59:18.963598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70688 len:8 PRP1 0x0 PRP2 0x0 00:15:47.230 [2024-11-15 10:59:18.963626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.230 [2024-11-15 10:59:18.963641] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.230 [2024-11-15 10:59:18.963663] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.230 [2024-11-15 10:59:18.963672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70696 len:8 PRP1 0x0 PRP2 0x0 00:15:47.230 [2024-11-15 10:59:18.963700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.230 [2024-11-15 10:59:18.963713] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.230 [2024-11-15 10:59:18.963722] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.230 [2024-11-15 10:59:18.963732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70704 len:8 PRP1 0x0 PRP2 0x0 00:15:47.230 [2024-11-15 10:59:18.963745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.230 [2024-11-15 10:59:18.963758] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.230 [2024-11-15 10:59:18.963796] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.230 [2024-11-15 10:59:18.963808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70712 len:8 PRP1 0x0 PRP2 0x0 00:15:47.230 [2024-11-15 10:59:18.963821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.230 [2024-11-15 10:59:18.963836] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.230 [2024-11-15 10:59:18.963853] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.230 [2024-11-15 10:59:18.963865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70720 len:8 PRP1 0x0 PRP2 0x0 00:15:47.231 [2024-11-15 10:59:18.963878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.231 [2024-11-15 10:59:18.963892] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.231 [2024-11-15 10:59:18.963902] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.231 [2024-11-15 10:59:18.963912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70728 len:8 PRP1 0x0 PRP2 0x0 00:15:47.231 [2024-11-15 10:59:18.963925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.231 [2024-11-15 10:59:18.963939] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.231 [2024-11-15 10:59:18.963949] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.231 [2024-11-15 10:59:18.963960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70736 len:8 PRP1 0x0 PRP2 0x0 00:15:47.231 [2024-11-15 10:59:18.963973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.231 [2024-11-15 10:59:18.963997] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:15:47.231 [2024-11-15 10:59:18.964006] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.231 [2024-11-15 10:59:18.964017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70744 len:8 PRP1 0x0 PRP2 0x0 00:15:47.231 [2024-11-15 10:59:18.964030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.231 [2024-11-15 10:59:18.964044] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.231 [2024-11-15 10:59:18.964054] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.231 [2024-11-15 10:59:18.964089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70752 len:8 PRP1 0x0 PRP2 0x0 00:15:47.231 [2024-11-15 10:59:18.964116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.231 [2024-11-15 10:59:18.964129] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.231 [2024-11-15 10:59:18.964138] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.231 [2024-11-15 10:59:18.964148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70760 len:8 PRP1 0x0 PRP2 0x0 00:15:47.231 [2024-11-15 10:59:18.964160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.231 [2024-11-15 10:59:18.964173] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.231 [2024-11-15 10:59:18.964182] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.231 [2024-11-15 10:59:18.964193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70768 len:8 PRP1 0x0 PRP2 0x0 00:15:47.231 [2024-11-15 10:59:18.964205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.231 [2024-11-15 10:59:18.964218] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.231 [2024-11-15 10:59:18.964239] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.231 [2024-11-15 10:59:18.964250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70776 len:8 PRP1 0x0 PRP2 0x0 00:15:47.231 [2024-11-15 10:59:18.964262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.231 [2024-11-15 10:59:18.964296] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.231 [2024-11-15 10:59:18.964315] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.231 [2024-11-15 10:59:18.964325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70784 len:8 PRP1 0x0 PRP2 0x0 00:15:47.231 [2024-11-15 10:59:18.964353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.231 [2024-11-15 10:59:18.964366] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.231 [2024-11-15 
10:59:18.964376] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.231 [2024-11-15 10:59:18.964386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70792 len:8 PRP1 0x0 PRP2 0x0 00:15:47.231 [2024-11-15 10:59:18.964398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.231 [2024-11-15 10:59:18.964412] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.231 [2024-11-15 10:59:18.964421] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.231 [2024-11-15 10:59:18.964431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70800 len:8 PRP1 0x0 PRP2 0x0 00:15:47.231 [2024-11-15 10:59:18.964444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.231 [2024-11-15 10:59:18.964457] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.231 [2024-11-15 10:59:18.964466] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.231 [2024-11-15 10:59:18.964476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:69808 len:8 PRP1 0x0 PRP2 0x0 00:15:47.231 [2024-11-15 10:59:18.964489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.231 [2024-11-15 10:59:18.964502] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.231 [2024-11-15 10:59:18.964511] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.231 [2024-11-15 10:59:18.964521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:69816 len:8 PRP1 0x0 PRP2 0x0 00:15:47.231 [2024-11-15 10:59:18.964563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.231 [2024-11-15 10:59:18.964576] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.231 [2024-11-15 10:59:18.964586] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.231 [2024-11-15 10:59:18.964596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:69824 len:8 PRP1 0x0 PRP2 0x0 00:15:47.231 [2024-11-15 10:59:18.964609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.231 [2024-11-15 10:59:18.964681] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.231 [2024-11-15 10:59:18.964697] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.231 [2024-11-15 10:59:18.964723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:69832 len:8 PRP1 0x0 PRP2 0x0 00:15:47.231 [2024-11-15 10:59:18.964736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.231 [2024-11-15 10:59:18.964749] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.231 [2024-11-15 10:59:18.964758] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.231 [2024-11-15 10:59:18.964768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:69840 len:8 PRP1 0x0 PRP2 0x0 00:15:47.231 [2024-11-15 10:59:18.964788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.231 [2024-11-15 10:59:18.964802] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.231 [2024-11-15 10:59:18.964811] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.231 [2024-11-15 10:59:18.964821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:69848 len:8 PRP1 0x0 PRP2 0x0 00:15:47.231 [2024-11-15 10:59:18.964834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.231 [2024-11-15 10:59:18.964862] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.231 [2024-11-15 10:59:18.964871] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.231 [2024-11-15 10:59:18.964881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:69856 len:8 PRP1 0x0 PRP2 0x0 00:15:47.231 [2024-11-15 10:59:18.964893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.231 [2024-11-15 10:59:18.964982] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:15:47.231 [2024-11-15 10:59:18.965041] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:47.231 [2024-11-15 10:59:18.965062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.231 [2024-11-15 10:59:18.965077] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:47.231 [2024-11-15 10:59:18.965090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.231 [2024-11-15 10:59:18.965103] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:47.231 [2024-11-15 10:59:18.965114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.231 [2024-11-15 10:59:18.965127] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:47.231 [2024-11-15 10:59:18.965139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.231 [2024-11-15 10:59:18.965151] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 
00:15:47.231 [2024-11-15 10:59:18.965194] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e7e710 (9): Bad file descriptor 00:15:47.231 [2024-11-15 10:59:18.969144] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:15:47.231 [2024-11-15 10:59:18.998422] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:15:47.231 7895.50 IOPS, 30.84 MiB/s [2024-11-15T10:59:34.092Z] 8270.00 IOPS, 32.30 MiB/s [2024-11-15T10:59:34.092Z] 8466.50 IOPS, 33.07 MiB/s [2024-11-15T10:59:34.092Z] [2024-11-15 10:59:22.605433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:77760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.231 [2024-11-15 10:59:22.605565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.231 [2024-11-15 10:59:22.605604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:77768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.231 [2024-11-15 10:59:22.605657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.231 [2024-11-15 10:59:22.605678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:77776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.231 [2024-11-15 10:59:22.605741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.231 [2024-11-15 10:59:22.605759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:77784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.232 [2024-11-15 10:59:22.605774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.232 [2024-11-15 10:59:22.605791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:77792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.232 [2024-11-15 10:59:22.605805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.232 [2024-11-15 10:59:22.605821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:77800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.232 [2024-11-15 10:59:22.605836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.232 [2024-11-15 10:59:22.605852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:77808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.232 [2024-11-15 10:59:22.605883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.232 [2024-11-15 10:59:22.605900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:77816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.232 [2024-11-15 10:59:22.605915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.232 [2024-11-15 10:59:22.605932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 
nsid:1 lba:77824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.232 [2024-11-15 10:59:22.605947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.232 [2024-11-15 10:59:22.605964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.232 [2024-11-15 10:59:22.605979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.232 [2024-11-15 10:59:22.605997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:77840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.232 [2024-11-15 10:59:22.606012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.232 [2024-11-15 10:59:22.606029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:77848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.232 [2024-11-15 10:59:22.606044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.232 [2024-11-15 10:59:22.606061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:77856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.232 [2024-11-15 10:59:22.606076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.232 [2024-11-15 10:59:22.606093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:77864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.232 [2024-11-15 10:59:22.606107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.232 [2024-11-15 10:59:22.606123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:77872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.232 [2024-11-15 10:59:22.606137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.232 [2024-11-15 10:59:22.606163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:77880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.232 [2024-11-15 10:59:22.606179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.232 [2024-11-15 10:59:22.606195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:77440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.232 [2024-11-15 10:59:22.606210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.232 [2024-11-15 10:59:22.606231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:77448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.232 [2024-11-15 10:59:22.606255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.232 [2024-11-15 10:59:22.606272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:77456 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:15:47.232 [2024-11-15 10:59:22.606287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.232 [2024-11-15 10:59:22.606303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:77464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.232 [2024-11-15 10:59:22.606318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.232 [2024-11-15 10:59:22.606353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:77472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.232 [2024-11-15 10:59:22.606368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.232 [2024-11-15 10:59:22.606385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:77480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.232 [2024-11-15 10:59:22.606402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.232 [2024-11-15 10:59:22.606420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:77488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.232 [2024-11-15 10:59:22.606435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.232 [2024-11-15 10:59:22.606452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:77496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.232 [2024-11-15 10:59:22.606467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.232 [2024-11-15 10:59:22.606484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:77888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.232 [2024-11-15 10:59:22.606500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.232 [2024-11-15 10:59:22.606517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:77896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.232 [2024-11-15 10:59:22.606531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.232 [2024-11-15 10:59:22.606548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:77904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.232 [2024-11-15 10:59:22.606564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.232 [2024-11-15 10:59:22.606581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:77912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.232 [2024-11-15 10:59:22.606622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.232 [2024-11-15 10:59:22.606660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:77920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.232 [2024-11-15 
10:59:22.606675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.232 [2024-11-15 10:59:22.606692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:77928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.232 [2024-11-15 10:59:22.606707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.232 [2024-11-15 10:59:22.606723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:77936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.232 [2024-11-15 10:59:22.606738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.232 [2024-11-15 10:59:22.606755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:77944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.232 [2024-11-15 10:59:22.606769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.232 [2024-11-15 10:59:22.606786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:77952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.232 [2024-11-15 10:59:22.606801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.232 [2024-11-15 10:59:22.606818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:77960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.232 [2024-11-15 10:59:22.606834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.232 [2024-11-15 10:59:22.606851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:77968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.232 [2024-11-15 10:59:22.606865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.232 [2024-11-15 10:59:22.606882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:77976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.232 [2024-11-15 10:59:22.606897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.232 [2024-11-15 10:59:22.606913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:77984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.232 [2024-11-15 10:59:22.606928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.232 [2024-11-15 10:59:22.606945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:77992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.232 [2024-11-15 10:59:22.606960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.232 [2024-11-15 10:59:22.606977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:78000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.232 [2024-11-15 10:59:22.606991] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.232 [2024-11-15 10:59:22.607009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:78008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.232 [2024-11-15 10:59:22.607024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.232 [2024-11-15 10:59:22.607040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:78016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.232 [2024-11-15 10:59:22.607064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.232 [2024-11-15 10:59:22.607082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:78024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.232 [2024-11-15 10:59:22.607097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.232 [2024-11-15 10:59:22.607113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:78032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.232 [2024-11-15 10:59:22.607128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.232 [2024-11-15 10:59:22.607144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:78040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.233 [2024-11-15 10:59:22.607159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.233 [2024-11-15 10:59:22.607175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:77504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.233 [2024-11-15 10:59:22.607190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.233 [2024-11-15 10:59:22.607207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:77512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.233 [2024-11-15 10:59:22.607222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.233 [2024-11-15 10:59:22.607239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:77520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.233 [2024-11-15 10:59:22.607253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.233 [2024-11-15 10:59:22.607270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:77528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.233 [2024-11-15 10:59:22.607285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.233 [2024-11-15 10:59:22.607301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:77536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.233 [2024-11-15 10:59:22.607315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.233 [2024-11-15 10:59:22.607351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:77544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.233 [2024-11-15 10:59:22.607367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.233 [2024-11-15 10:59:22.607384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:77552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.233 [2024-11-15 10:59:22.607399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.233 [2024-11-15 10:59:22.607416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:77560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.233 [2024-11-15 10:59:22.607431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.233 [2024-11-15 10:59:22.607448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:78048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.233 [2024-11-15 10:59:22.607463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.233 [2024-11-15 10:59:22.607489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:78056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.233 [2024-11-15 10:59:22.607506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.233 [2024-11-15 10:59:22.607524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:78064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.233 [2024-11-15 10:59:22.607539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.233 [2024-11-15 10:59:22.607566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:78072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.233 [2024-11-15 10:59:22.607585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.233 [2024-11-15 10:59:22.607603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:78080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.233 [2024-11-15 10:59:22.607618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.233 [2024-11-15 10:59:22.607654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:78088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.233 [2024-11-15 10:59:22.607670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.233 [2024-11-15 10:59:22.607702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:78096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.233 [2024-11-15 10:59:22.607718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:15:47.233 [2024-11-15 10:59:22.607735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:78104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.233 [2024-11-15 10:59:22.607750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.233 [2024-11-15 10:59:22.607792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:78112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.233 [2024-11-15 10:59:22.607812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.233 [2024-11-15 10:59:22.607830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:78120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.233 [2024-11-15 10:59:22.607846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.233 [2024-11-15 10:59:22.607863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:78128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.233 [2024-11-15 10:59:22.607878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.233 [2024-11-15 10:59:22.607896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:78136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.233 [2024-11-15 10:59:22.607911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.233 [2024-11-15 10:59:22.607929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:78144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.233 [2024-11-15 10:59:22.607944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.233 [2024-11-15 10:59:22.607963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:78152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.233 [2024-11-15 10:59:22.607988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.233 [2024-11-15 10:59:22.608008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:78160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.233 [2024-11-15 10:59:22.608025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.233 [2024-11-15 10:59:22.608043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:78168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.233 [2024-11-15 10:59:22.608081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.233 [2024-11-15 10:59:22.608099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:78176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.233 [2024-11-15 10:59:22.608116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.233 [2024-11-15 
10:59:22.608134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:78184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.233 [2024-11-15 10:59:22.608149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.233 [2024-11-15 10:59:22.608166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:77568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.233 [2024-11-15 10:59:22.608181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.233 [2024-11-15 10:59:22.608199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:77576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.233 [2024-11-15 10:59:22.608214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.233 [2024-11-15 10:59:22.608231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:77584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.233 [2024-11-15 10:59:22.608246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.233 [2024-11-15 10:59:22.608263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:77592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.233 [2024-11-15 10:59:22.608278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.233 [2024-11-15 10:59:22.608296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:77600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.233 [2024-11-15 10:59:22.608311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.233 [2024-11-15 10:59:22.608340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:77608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.233 [2024-11-15 10:59:22.608356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.233 [2024-11-15 10:59:22.608373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:77616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.233 [2024-11-15 10:59:22.608389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.233 [2024-11-15 10:59:22.608406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:77624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.233 [2024-11-15 10:59:22.608422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.233 [2024-11-15 10:59:22.608447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:77632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.233 [2024-11-15 10:59:22.608464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.233 [2024-11-15 10:59:22.608482] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:77640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.233 [2024-11-15 10:59:22.608498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.233 [2024-11-15 10:59:22.608515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:77648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.233 [2024-11-15 10:59:22.608530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.233 [2024-11-15 10:59:22.608559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:77656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.233 [2024-11-15 10:59:22.608591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.233 [2024-11-15 10:59:22.608610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:77664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.233 [2024-11-15 10:59:22.608627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.233 [2024-11-15 10:59:22.608645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:77672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.233 [2024-11-15 10:59:22.608677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.234 [2024-11-15 10:59:22.608694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:77680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.234 [2024-11-15 10:59:22.608711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.234 [2024-11-15 10:59:22.608729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:77688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.234 [2024-11-15 10:59:22.608744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.234 [2024-11-15 10:59:22.608761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:78192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.234 [2024-11-15 10:59:22.608776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.234 [2024-11-15 10:59:22.608794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:78200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.234 [2024-11-15 10:59:22.608808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.234 [2024-11-15 10:59:22.608825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:78208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.234 [2024-11-15 10:59:22.608841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.234 [2024-11-15 10:59:22.608857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:52 nsid:1 lba:78216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.234 [2024-11-15 10:59:22.608872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.234 [2024-11-15 10:59:22.608889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:78224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.234 [2024-11-15 10:59:22.608904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.234 [2024-11-15 10:59:22.608931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:78232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.234 [2024-11-15 10:59:22.608947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.234 [2024-11-15 10:59:22.608964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:78240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.234 [2024-11-15 10:59:22.608980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.234 [2024-11-15 10:59:22.608997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:78248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.234 [2024-11-15 10:59:22.609012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.234 [2024-11-15 10:59:22.609029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:78256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.234 [2024-11-15 10:59:22.609044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.234 [2024-11-15 10:59:22.609061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:78264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.234 [2024-11-15 10:59:22.609076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.234 [2024-11-15 10:59:22.609093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:78272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.234 [2024-11-15 10:59:22.609108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.234 [2024-11-15 10:59:22.609126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:78280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.234 [2024-11-15 10:59:22.609141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.234 [2024-11-15 10:59:22.609158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:78288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.234 [2024-11-15 10:59:22.609173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.234 [2024-11-15 10:59:22.609191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:78296 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:15:47.234 [2024-11-15 10:59:22.609206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.234 [2024-11-15 10:59:22.609223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:78304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.234 [2024-11-15 10:59:22.609239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.234 [2024-11-15 10:59:22.609256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:77696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.234 [2024-11-15 10:59:22.609283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.234 [2024-11-15 10:59:22.609315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:77704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.234 [2024-11-15 10:59:22.609361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.234 [2024-11-15 10:59:22.609379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:77712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.234 [2024-11-15 10:59:22.609421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.234 [2024-11-15 10:59:22.609457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:77720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.234 [2024-11-15 10:59:22.609475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.234 [2024-11-15 10:59:22.609494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:77728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.234 [2024-11-15 10:59:22.609512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.234 [2024-11-15 10:59:22.609530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:77736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.234 [2024-11-15 10:59:22.609547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.234 [2024-11-15 10:59:22.609567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:77744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.234 [2024-11-15 10:59:22.609584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.234 [2024-11-15 10:59:22.609602] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f1b9e0 is same with the state(6) to be set 00:15:47.234 [2024-11-15 10:59:22.609636] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.234 [2024-11-15 10:59:22.609654] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.234 [2024-11-15 10:59:22.609684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 
lba:77752 len:8 PRP1 0x0 PRP2 0x0 00:15:47.234 [2024-11-15 10:59:22.609717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.234 [2024-11-15 10:59:22.609734] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.234 [2024-11-15 10:59:22.609747] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.234 [2024-11-15 10:59:22.609759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78312 len:8 PRP1 0x0 PRP2 0x0 00:15:47.234 [2024-11-15 10:59:22.609775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.234 [2024-11-15 10:59:22.609791] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.234 [2024-11-15 10:59:22.609804] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.234 [2024-11-15 10:59:22.609817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78320 len:8 PRP1 0x0 PRP2 0x0 00:15:47.234 [2024-11-15 10:59:22.609832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.234 [2024-11-15 10:59:22.609864] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.234 [2024-11-15 10:59:22.609892] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.234 [2024-11-15 10:59:22.609904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78328 len:8 PRP1 0x0 PRP2 0x0 00:15:47.234 [2024-11-15 10:59:22.609918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.234 [2024-11-15 10:59:22.609933] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.234 [2024-11-15 10:59:22.609946] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.234 [2024-11-15 10:59:22.609957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78336 len:8 PRP1 0x0 PRP2 0x0 00:15:47.234 [2024-11-15 10:59:22.609985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.234 [2024-11-15 10:59:22.610002] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.234 [2024-11-15 10:59:22.610013] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.234 [2024-11-15 10:59:22.610025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78344 len:8 PRP1 0x0 PRP2 0x0 00:15:47.234 [2024-11-15 10:59:22.610040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.234 [2024-11-15 10:59:22.610055] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.234 [2024-11-15 10:59:22.610066] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.234 [2024-11-15 10:59:22.610078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78352 len:8 PRP1 0x0 PRP2 0x0 00:15:47.235 
[2024-11-15 10:59:22.610092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.235 [2024-11-15 10:59:22.610107] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.235 [2024-11-15 10:59:22.610120] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.235 [2024-11-15 10:59:22.610132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78360 len:8 PRP1 0x0 PRP2 0x0 00:15:47.235 [2024-11-15 10:59:22.610146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.235 [2024-11-15 10:59:22.610161] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.235 [2024-11-15 10:59:22.610173] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.235 [2024-11-15 10:59:22.610185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78368 len:8 PRP1 0x0 PRP2 0x0 00:15:47.235 [2024-11-15 10:59:22.610200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.235 [2024-11-15 10:59:22.610215] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.235 [2024-11-15 10:59:22.610227] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.235 [2024-11-15 10:59:22.610238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78376 len:8 PRP1 0x0 PRP2 0x0 00:15:47.235 [2024-11-15 10:59:22.610254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.235 [2024-11-15 10:59:22.610270] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.235 [2024-11-15 10:59:22.610282] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.235 [2024-11-15 10:59:22.610293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78384 len:8 PRP1 0x0 PRP2 0x0 00:15:47.235 [2024-11-15 10:59:22.610308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.235 [2024-11-15 10:59:22.610341] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.235 [2024-11-15 10:59:22.610370] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.235 [2024-11-15 10:59:22.610383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78392 len:8 PRP1 0x0 PRP2 0x0 00:15:47.235 [2024-11-15 10:59:22.610399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.235 [2024-11-15 10:59:22.610415] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.235 [2024-11-15 10:59:22.610435] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.235 [2024-11-15 10:59:22.610448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78400 len:8 PRP1 0x0 PRP2 0x0 00:15:47.235 [2024-11-15 10:59:22.610464] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.235 [2024-11-15 10:59:22.610480] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.235 [2024-11-15 10:59:22.610507] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.235 [2024-11-15 10:59:22.610521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78408 len:8 PRP1 0x0 PRP2 0x0 00:15:47.235 [2024-11-15 10:59:22.610537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.235 [2024-11-15 10:59:22.610553] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.235 [2024-11-15 10:59:22.610566] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.235 [2024-11-15 10:59:22.610595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78416 len:8 PRP1 0x0 PRP2 0x0 00:15:47.235 [2024-11-15 10:59:22.610611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.235 [2024-11-15 10:59:22.610628] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.235 [2024-11-15 10:59:22.610654] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.235 [2024-11-15 10:59:22.610688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78424 len:8 PRP1 0x0 PRP2 0x0 00:15:47.235 [2024-11-15 10:59:22.610719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.235 [2024-11-15 10:59:22.610736] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.235 [2024-11-15 10:59:22.610748] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.235 [2024-11-15 10:59:22.610760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78432 len:8 PRP1 0x0 PRP2 0x0 00:15:47.235 [2024-11-15 10:59:22.610777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.235 [2024-11-15 10:59:22.610792] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.235 [2024-11-15 10:59:22.610805] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.235 [2024-11-15 10:59:22.610817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78440 len:8 PRP1 0x0 PRP2 0x0 00:15:47.235 [2024-11-15 10:59:22.610833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.235 [2024-11-15 10:59:22.610849] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.235 [2024-11-15 10:59:22.610862] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.235 [2024-11-15 10:59:22.610875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78448 len:8 PRP1 0x0 PRP2 0x0 00:15:47.235 [2024-11-15 10:59:22.610891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.235 [2024-11-15 10:59:22.610907] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.235 [2024-11-15 10:59:22.610920] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.235 [2024-11-15 10:59:22.610932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78456 len:8 PRP1 0x0 PRP2 0x0 00:15:47.235 [2024-11-15 10:59:22.610948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.235 [2024-11-15 10:59:22.611065] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.3:4421 to 10.0.0.3:4422 00:15:47.235 [2024-11-15 10:59:22.611141] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:47.235 [2024-11-15 10:59:22.611166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.235 [2024-11-15 10:59:22.611183] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:47.235 [2024-11-15 10:59:22.611199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.235 [2024-11-15 10:59:22.611215] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:47.235 [2024-11-15 10:59:22.611230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.235 [2024-11-15 10:59:22.611246] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:47.235 [2024-11-15 10:59:22.611260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.235 [2024-11-15 10:59:22.611276] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:15:47.235 [2024-11-15 10:59:22.611369] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e7e710 (9): Bad file descriptor 00:15:47.235 [2024-11-15 10:59:22.615179] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:15:47.235 [2024-11-15 10:59:22.646277] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 
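The notices above trace one complete failover cycle: the active TCP path drops, the queued I/O on the old submission queue is completed manually with ABORTED - SQ DELETION status, bdev_nvme_failover_trid switches the transport ID from 10.0.0.3:4421 to 10.0.0.3:4422, the controller is disconnected and reset on the new path, and the aborted I/O is then retried (the IOPS samples that follow show throughput recovering). The short C sketch below is only a minimal model of that ordering under those assumptions; every name in it (path_t, reset_on_path, the hard-coded address list, the queued_io count) is a hypothetical stand-in introduced for illustration, not SPDK code -- the real logic lives in bdev_nvme.c and nvme_qpair.c.

/*
 * Illustrative model only -- not SPDK source.  It reproduces the ordering of
 * events visible in the log above:
 *   1. the active TCP path drops and queued I/O is completed manually with
 *      "ABORTED - SQ DELETION",
 *   2. the next configured transport ID is selected (4420 -> 4421 -> 4422),
 *   3. the controller is reset on the new path and the aborted I/O is retried.
 */
#include <stdio.h>

#define NUM_PATHS 3

typedef struct {
    const char *addr;
    int         port;
} path_t;

/* Stand-in for disconnect + reconnect on the newly selected path. */
static void reset_on_path(const path_t *p)
{
    printf("resetting controller via %s:%d\n", p->addr, p->port);
}

int main(void)
{
    path_t paths[NUM_PATHS] = {
        { "10.0.0.3", 4420 },
        { "10.0.0.3", 4421 },
        { "10.0.0.3", 4422 },
    };
    int queued_io = 4;   /* stand-in for the queued requests seen in the log */

    for (int active = 0; active + 1 < NUM_PATHS; active++) {
        /* Active path failed: complete queued I/O manually with SQ DELETION status. */
        printf("aborting %d queued i/o (ABORTED - SQ DELETION) on %s:%d\n",
               queued_io, paths[active].addr, paths[active].port);
        /* Select the next transport ID configured for this subsystem... */
        printf("Start failover from %s:%d to %s:%d\n",
               paths[active].addr, paths[active].port,
               paths[active + 1].addr, paths[active + 1].port);
        /* ...reset the controller there, after which the upper layer retries the I/O. */
        reset_on_path(&paths[active + 1]);
        printf("Resetting controller successful, retrying %d queued I/O\n", queued_io);
    }
    return 0;
}

Built with any C compiler, the sketch prints the same sequence of events as the log: abort of queued I/O, the failover notice, the reset on the new path, and the successful retry.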
00:15:47.235 8491.80 IOPS, 33.17 MiB/s [2024-11-15T10:59:34.096Z] 8622.50 IOPS, 33.68 MiB/s [2024-11-15T10:59:34.096Z] 8821.57 IOPS, 34.46 MiB/s [2024-11-15T10:59:34.096Z] 8950.88 IOPS, 34.96 MiB/s [2024-11-15T10:59:34.096Z] 9047.89 IOPS, 35.34 MiB/s [2024-11-15T10:59:34.096Z] [2024-11-15 10:59:27.176153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:46888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.235 [2024-11-15 10:59:27.176244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.235 [2024-11-15 10:59:27.176280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:46896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.235 [2024-11-15 10:59:27.176300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.235 [2024-11-15 10:59:27.176318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:46904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.235 [2024-11-15 10:59:27.176335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.235 [2024-11-15 10:59:27.176353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:46912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.235 [2024-11-15 10:59:27.176368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.235 [2024-11-15 10:59:27.176386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:46920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.235 [2024-11-15 10:59:27.176404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.235 [2024-11-15 10:59:27.176421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:46928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.235 [2024-11-15 10:59:27.176437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.235 [2024-11-15 10:59:27.176496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:46936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.235 [2024-11-15 10:59:27.176514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.235 [2024-11-15 10:59:27.176548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:46944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.235 [2024-11-15 10:59:27.176566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.235 [2024-11-15 10:59:27.176583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:46440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.235 [2024-11-15 10:59:27.176599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.236 [2024-11-15 10:59:27.176618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:46448 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.236 [2024-11-15 10:59:27.176633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.236 [2024-11-15 10:59:27.176650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:46456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.236 [2024-11-15 10:59:27.176666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.236 [2024-11-15 10:59:27.176683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:46464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.236 [2024-11-15 10:59:27.176699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.236 [2024-11-15 10:59:27.176716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:46472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.236 [2024-11-15 10:59:27.176733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.236 [2024-11-15 10:59:27.176750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:46480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.236 [2024-11-15 10:59:27.176764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.236 [2024-11-15 10:59:27.176781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:46488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.236 [2024-11-15 10:59:27.176796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.236 [2024-11-15 10:59:27.176813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:46496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.236 [2024-11-15 10:59:27.176828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.236 [2024-11-15 10:59:27.176845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:46952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.236 [2024-11-15 10:59:27.176860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.236 [2024-11-15 10:59:27.176881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:46960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.236 [2024-11-15 10:59:27.176898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.236 [2024-11-15 10:59:27.176915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:46968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.236 [2024-11-15 10:59:27.176942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.236 [2024-11-15 10:59:27.176974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:46976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:15:47.236 [2024-11-15 10:59:27.176990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.236 [2024-11-15 10:59:27.177007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:46984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.236 [2024-11-15 10:59:27.177023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.236 [2024-11-15 10:59:27.177041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:46992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.236 [2024-11-15 10:59:27.177056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.236 [2024-11-15 10:59:27.177073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:47000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.236 [2024-11-15 10:59:27.177089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.236 [2024-11-15 10:59:27.177106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:47008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.236 [2024-11-15 10:59:27.177122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.236 [2024-11-15 10:59:27.177139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:47016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.236 [2024-11-15 10:59:27.177155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.236 [2024-11-15 10:59:27.177172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:47024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.236 [2024-11-15 10:59:27.177188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.236 [2024-11-15 10:59:27.177205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:47032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.236 [2024-11-15 10:59:27.177221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.236 [2024-11-15 10:59:27.177238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:47040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.236 [2024-11-15 10:59:27.177255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.236 [2024-11-15 10:59:27.177272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:47048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.236 [2024-11-15 10:59:27.177287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.236 [2024-11-15 10:59:27.177304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:47056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.236 [2024-11-15 10:59:27.177319] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.236 [2024-11-15 10:59:27.177336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:47064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.236 [2024-11-15 10:59:27.177352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.236 [2024-11-15 10:59:27.177377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:47072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.236 [2024-11-15 10:59:27.177394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.236 [2024-11-15 10:59:27.177412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:47080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.236 [2024-11-15 10:59:27.177428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.236 [2024-11-15 10:59:27.177447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:47088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.236 [2024-11-15 10:59:27.177464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.236 [2024-11-15 10:59:27.177482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:47096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.236 [2024-11-15 10:59:27.177497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.236 [2024-11-15 10:59:27.177514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:47104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.236 [2024-11-15 10:59:27.177547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.236 [2024-11-15 10:59:27.177566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:46504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.236 [2024-11-15 10:59:27.177582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.236 [2024-11-15 10:59:27.177600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:46512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.236 [2024-11-15 10:59:27.177616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.236 [2024-11-15 10:59:27.177634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:46520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.236 [2024-11-15 10:59:27.177649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.236 [2024-11-15 10:59:27.177666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:46528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.236 [2024-11-15 10:59:27.177682] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.236 [2024-11-15 10:59:27.177699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:46536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.236 [2024-11-15 10:59:27.177715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.236 [2024-11-15 10:59:27.177733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:46544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.236 [2024-11-15 10:59:27.177749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.236 [2024-11-15 10:59:27.177766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:46552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.236 [2024-11-15 10:59:27.177782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.236 [2024-11-15 10:59:27.177799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:46560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.236 [2024-11-15 10:59:27.177814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.236 [2024-11-15 10:59:27.177842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:47112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.236 [2024-11-15 10:59:27.177858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.236 [2024-11-15 10:59:27.177875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:47120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.236 [2024-11-15 10:59:27.177891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.236 [2024-11-15 10:59:27.177908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:47128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.236 [2024-11-15 10:59:27.177924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.236 [2024-11-15 10:59:27.177941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:47136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.236 [2024-11-15 10:59:27.177957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.236 [2024-11-15 10:59:27.177973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:47144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.236 [2024-11-15 10:59:27.177989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.237 [2024-11-15 10:59:27.178007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:47152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.237 [2024-11-15 10:59:27.178023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.237 [2024-11-15 10:59:27.178040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:47160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.237 [2024-11-15 10:59:27.178056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.237 [2024-11-15 10:59:27.178074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:47168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.237 [2024-11-15 10:59:27.178090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.237 [2024-11-15 10:59:27.178107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:47176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.237 [2024-11-15 10:59:27.178122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.237 [2024-11-15 10:59:27.178139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:47184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.237 [2024-11-15 10:59:27.178158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.237 [2024-11-15 10:59:27.178176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:47192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.237 [2024-11-15 10:59:27.178191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.237 [2024-11-15 10:59:27.178209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:47200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.237 [2024-11-15 10:59:27.178224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.237 [2024-11-15 10:59:27.178242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:47208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.237 [2024-11-15 10:59:27.178267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.237 [2024-11-15 10:59:27.178285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:47216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.237 [2024-11-15 10:59:27.178301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.237 [2024-11-15 10:59:27.178318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:47224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.237 [2024-11-15 10:59:27.178339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.237 [2024-11-15 10:59:27.178356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:47232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.237 [2024-11-15 10:59:27.178372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.237 
[2024-11-15 10:59:27.178389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:47240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.237 [2024-11-15 10:59:27.178405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.237 [2024-11-15 10:59:27.178422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:47248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.237 [2024-11-15 10:59:27.178437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.237 [2024-11-15 10:59:27.178455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:46568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.237 [2024-11-15 10:59:27.178471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.237 [2024-11-15 10:59:27.178488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:46576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.237 [2024-11-15 10:59:27.178503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.237 [2024-11-15 10:59:27.178520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:46584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.237 [2024-11-15 10:59:27.178551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.237 [2024-11-15 10:59:27.178575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:46592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.237 [2024-11-15 10:59:27.178593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.237 [2024-11-15 10:59:27.178610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:46600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.237 [2024-11-15 10:59:27.178626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.237 [2024-11-15 10:59:27.178643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:46608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.237 [2024-11-15 10:59:27.178659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.237 [2024-11-15 10:59:27.178676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:46616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.237 [2024-11-15 10:59:27.178691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.237 [2024-11-15 10:59:27.178718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:46624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.237 [2024-11-15 10:59:27.178736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.237 [2024-11-15 10:59:27.178754] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:46632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.237 [2024-11-15 10:59:27.178769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.237 [2024-11-15 10:59:27.178787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:46640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.237 [2024-11-15 10:59:27.178802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.237 [2024-11-15 10:59:27.178819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:46648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.237 [2024-11-15 10:59:27.178835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.237 [2024-11-15 10:59:27.178861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:46656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.237 [2024-11-15 10:59:27.178876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.237 [2024-11-15 10:59:27.178893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:46664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.237 [2024-11-15 10:59:27.178908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.237 [2024-11-15 10:59:27.178926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:46672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.237 [2024-11-15 10:59:27.178943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.237 [2024-11-15 10:59:27.178960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:46680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.237 [2024-11-15 10:59:27.178975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.237 [2024-11-15 10:59:27.179007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:46688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.237 [2024-11-15 10:59:27.179024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.237 [2024-11-15 10:59:27.179041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:47256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.237 [2024-11-15 10:59:27.179057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.237 [2024-11-15 10:59:27.179074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:47264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.237 [2024-11-15 10:59:27.179089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.237 [2024-11-15 10:59:27.179106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:56 nsid:1 lba:47272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.237 [2024-11-15 10:59:27.179122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.237 [2024-11-15 10:59:27.179141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:47280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.237 [2024-11-15 10:59:27.179173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.237 [2024-11-15 10:59:27.179192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:47288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.237 [2024-11-15 10:59:27.179207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.237 [2024-11-15 10:59:27.179224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:47296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.237 [2024-11-15 10:59:27.179240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.237 [2024-11-15 10:59:27.179257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:47304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.237 [2024-11-15 10:59:27.179273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.237 [2024-11-15 10:59:27.179290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:47312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.237 [2024-11-15 10:59:27.179306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.237 [2024-11-15 10:59:27.179323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:47320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.237 [2024-11-15 10:59:27.179338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.237 [2024-11-15 10:59:27.179355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:47328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.237 [2024-11-15 10:59:27.179370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.237 [2024-11-15 10:59:27.179388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:47336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.237 [2024-11-15 10:59:27.179403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.238 [2024-11-15 10:59:27.179419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:47344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.238 [2024-11-15 10:59:27.179434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.238 [2024-11-15 10:59:27.179452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:47352 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:15:47.238 [2024-11-15 10:59:27.179468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.238 [2024-11-15 10:59:27.179484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:47360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.238 [2024-11-15 10:59:27.179500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.238 [2024-11-15 10:59:27.179516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:47368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:47.238 [2024-11-15 10:59:27.179558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.238 [2024-11-15 10:59:27.179584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:46696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.238 [2024-11-15 10:59:27.179601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.238 [2024-11-15 10:59:27.179618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:46704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.238 [2024-11-15 10:59:27.179644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.238 [2024-11-15 10:59:27.179663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:46712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.238 [2024-11-15 10:59:27.179679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.238 [2024-11-15 10:59:27.179696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:46720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.238 [2024-11-15 10:59:27.179711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.238 [2024-11-15 10:59:27.179729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:46728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.238 [2024-11-15 10:59:27.179745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.238 [2024-11-15 10:59:27.179762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:46736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.238 [2024-11-15 10:59:27.179793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.238 [2024-11-15 10:59:27.179817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:46744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.238 [2024-11-15 10:59:27.179833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.238 [2024-11-15 10:59:27.179850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:46752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.238 
[2024-11-15 10:59:27.179866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.238 [2024-11-15 10:59:27.179883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:46760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.238 [2024-11-15 10:59:27.179899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.238 [2024-11-15 10:59:27.179915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:46768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.238 [2024-11-15 10:59:27.179931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.238 [2024-11-15 10:59:27.179948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:46776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.238 [2024-11-15 10:59:27.179964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.238 [2024-11-15 10:59:27.179981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:46784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.238 [2024-11-15 10:59:27.179997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.238 [2024-11-15 10:59:27.180014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:46792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.238 [2024-11-15 10:59:27.180029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.238 [2024-11-15 10:59:27.180047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:46800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.238 [2024-11-15 10:59:27.180062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.238 [2024-11-15 10:59:27.180089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:46808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:47.238 [2024-11-15 10:59:27.180113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.238 [2024-11-15 10:59:27.180129] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f1b6a0 is same with the state(6) to be set 00:15:47.238 [2024-11-15 10:59:27.180164] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.238 [2024-11-15 10:59:27.180180] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.238 [2024-11-15 10:59:27.180202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:46816 len:8 PRP1 0x0 PRP2 0x0 00:15:47.238 [2024-11-15 10:59:27.180217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.238 [2024-11-15 10:59:27.180234] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.238 [2024-11-15 10:59:27.180246] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.238 [2024-11-15 10:59:27.180266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47376 len:8 PRP1 0x0 PRP2 0x0 00:15:47.238 [2024-11-15 10:59:27.180281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.238 [2024-11-15 10:59:27.180297] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.238 [2024-11-15 10:59:27.180310] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.238 [2024-11-15 10:59:27.180322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47384 len:8 PRP1 0x0 PRP2 0x0 00:15:47.238 [2024-11-15 10:59:27.180338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.238 [2024-11-15 10:59:27.180355] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.238 [2024-11-15 10:59:27.180367] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.238 [2024-11-15 10:59:27.180379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47392 len:8 PRP1 0x0 PRP2 0x0 00:15:47.238 [2024-11-15 10:59:27.180394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.238 [2024-11-15 10:59:27.180409] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.238 [2024-11-15 10:59:27.180421] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.238 [2024-11-15 10:59:27.180433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47400 len:8 PRP1 0x0 PRP2 0x0 00:15:47.238 [2024-11-15 10:59:27.180448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.238 [2024-11-15 10:59:27.180463] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.238 [2024-11-15 10:59:27.180475] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.238 [2024-11-15 10:59:27.180487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47408 len:8 PRP1 0x0 PRP2 0x0 00:15:47.238 [2024-11-15 10:59:27.180503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.238 [2024-11-15 10:59:27.180534] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.238 [2024-11-15 10:59:27.180545] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.238 [2024-11-15 10:59:27.180572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47416 len:8 PRP1 0x0 PRP2 0x0 00:15:47.238 [2024-11-15 10:59:27.180600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.238 [2024-11-15 10:59:27.180617] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.238 [2024-11-15 10:59:27.180630] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:15:47.238 [2024-11-15 10:59:27.180642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47424 len:8 PRP1 0x0 PRP2 0x0 00:15:47.238 [2024-11-15 10:59:27.180657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.238 [2024-11-15 10:59:27.180676] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.238 [2024-11-15 10:59:27.180699] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.238 [2024-11-15 10:59:27.180712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47432 len:8 PRP1 0x0 PRP2 0x0 00:15:47.238 [2024-11-15 10:59:27.180727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.238 [2024-11-15 10:59:27.180742] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.238 [2024-11-15 10:59:27.180754] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.238 [2024-11-15 10:59:27.180766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47440 len:8 PRP1 0x0 PRP2 0x0 00:15:47.238 [2024-11-15 10:59:27.180780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.238 [2024-11-15 10:59:27.180796] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.238 [2024-11-15 10:59:27.180809] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.238 [2024-11-15 10:59:27.180822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47448 len:8 PRP1 0x0 PRP2 0x0 00:15:47.238 [2024-11-15 10:59:27.180836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.238 [2024-11-15 10:59:27.180852] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.238 [2024-11-15 10:59:27.180863] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.238 [2024-11-15 10:59:27.180876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47456 len:8 PRP1 0x0 PRP2 0x0 00:15:47.239 [2024-11-15 10:59:27.180902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.239 [2024-11-15 10:59:27.180918] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.239 [2024-11-15 10:59:27.180930] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.239 [2024-11-15 10:59:27.180941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:46824 len:8 PRP1 0x0 PRP2 0x0 00:15:47.239 [2024-11-15 10:59:27.180956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.239 [2024-11-15 10:59:27.180972] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.239 [2024-11-15 10:59:27.180983] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.239 [2024-11-15 
10:59:27.180995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:46832 len:8 PRP1 0x0 PRP2 0x0 00:15:47.239 [2024-11-15 10:59:27.181010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.239 [2024-11-15 10:59:27.181026] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.239 [2024-11-15 10:59:27.181037] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.239 [2024-11-15 10:59:27.181058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:46840 len:8 PRP1 0x0 PRP2 0x0 00:15:47.239 [2024-11-15 10:59:27.181074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.239 [2024-11-15 10:59:27.181090] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.239 [2024-11-15 10:59:27.181102] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.239 [2024-11-15 10:59:27.181114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:46848 len:8 PRP1 0x0 PRP2 0x0 00:15:47.239 [2024-11-15 10:59:27.181130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.239 [2024-11-15 10:59:27.181146] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.239 [2024-11-15 10:59:27.181158] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.239 [2024-11-15 10:59:27.181170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:46856 len:8 PRP1 0x0 PRP2 0x0 00:15:47.239 [2024-11-15 10:59:27.181185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.239 [2024-11-15 10:59:27.181212] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.239 [2024-11-15 10:59:27.181224] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.239 [2024-11-15 10:59:27.181236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:46864 len:8 PRP1 0x0 PRP2 0x0 00:15:47.239 [2024-11-15 10:59:27.181250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.239 [2024-11-15 10:59:27.181274] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.239 [2024-11-15 10:59:27.181287] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.239 [2024-11-15 10:59:27.181299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:46872 len:8 PRP1 0x0 PRP2 0x0 00:15:47.239 [2024-11-15 10:59:27.181314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.239 [2024-11-15 10:59:27.181330] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:47.239 [2024-11-15 10:59:27.181342] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:47.239 [2024-11-15 10:59:27.181354] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:46880 len:8 PRP1 0x0 PRP2 0x0 00:15:47.239 [2024-11-15 10:59:27.181369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.239 [2024-11-15 10:59:27.181446] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.3:4422 to 10.0.0.3:4420 00:15:47.239 [2024-11-15 10:59:27.181518] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:47.239 [2024-11-15 10:59:27.181567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.239 [2024-11-15 10:59:27.181585] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:47.239 [2024-11-15 10:59:27.181601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.239 [2024-11-15 10:59:27.181617] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:47.239 [2024-11-15 10:59:27.181632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.239 [2024-11-15 10:59:27.181661] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:47.239 [2024-11-15 10:59:27.181678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.239 [2024-11-15 10:59:27.181695] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:15:47.239 [2024-11-15 10:59:27.181737] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e7e710 (9): Bad file descriptor 00:15:47.239 [2024-11-15 10:59:27.185106] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:15:47.239 [2024-11-15 10:59:27.206930] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 
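The aborts above are the expected effect of tearing down one TCP path: every command still queued on the old submission queue completes with ABORTED - SQ DELETION, and bdev_nvme then fails over to the next registered path (here from 10.0.0.3:4422 back to 10.0.0.3:4420) and resets the controller. A minimal sketch of how such a multipath controller is assembled with the SPDK RPCs that appear later in this log (subsystem name, bdev name, addresses and the bdevperf RPC socket are taken from the trace; this is illustrative, not the exact failover.sh flow):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # expose two extra listeners on the target so there is somewhere to fail over to
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422
    # register the primary path plus the alternates on the bdevperf side, marked for failover
    for port in 4420 4421 4422; do
        $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
            -a 10.0.0.3 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
    done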
00:15:47.239 9099.80 IOPS, 35.55 MiB/s [2024-11-15T10:59:34.100Z] 9175.09 IOPS, 35.84 MiB/s [2024-11-15T10:59:34.100Z] 9239.50 IOPS, 36.09 MiB/s [2024-11-15T10:59:34.100Z] 9311.38 IOPS, 36.37 MiB/s [2024-11-15T10:59:34.100Z] 9383.29 IOPS, 36.65 MiB/s [2024-11-15T10:59:34.100Z] 9442.00 IOPS, 36.88 MiB/s 00:15:47.239 Latency(us) 00:15:47.239 [2024-11-15T10:59:34.100Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:47.239 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:47.239 Verification LBA range: start 0x0 length 0x4000 00:15:47.239 NVMe0n1 : 15.01 9442.44 36.88 242.30 0.00 13188.49 595.78 25618.62 00:15:47.239 [2024-11-15T10:59:34.100Z] =================================================================================================================== 00:15:47.239 [2024-11-15T10:59:34.100Z] Total : 9442.44 36.88 242.30 0.00 13188.49 595.78 25618.62 00:15:47.239 Received shutdown signal, test time was about 15.000000 seconds 00:15:47.239 00:15:47.239 Latency(us) 00:15:47.239 [2024-11-15T10:59:34.100Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:47.239 [2024-11-15T10:59:34.100Z] =================================================================================================================== 00:15:47.239 [2024-11-15T10:59:34.100Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:47.239 10:59:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:15:47.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:47.239 10:59:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:15:47.239 10:59:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:15:47.239 10:59:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=75230 00:15:47.239 10:59:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:15:47.239 10:59:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 75230 /var/tmp/bdevperf.sock 00:15:47.239 10:59:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 75230 ']' 00:15:47.239 10:59:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:47.239 10:59:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:47.239 10:59:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
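The grep -c check traced above gates the first half of the test: the bdevperf log must record one successful controller reset per planned failover, three in total. A minimal sketch of that assertion (the log file name is assumed from the cat and rm calls later in the trace):

    log=/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt   # assumed from later in the trace
    count=$(grep -c 'Resetting controller successful' "$log")
    if (( count != 3 )); then
        echo "expected 3 successful controller resets, saw $count" >&2
        exit 1
    fi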
00:15:47.239 10:59:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:47.239 10:59:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:47.239 10:59:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:47.239 10:59:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:15:47.239 10:59:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:15:47.239 [2024-11-15 10:59:33.738687] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:15:47.239 10:59:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:15:47.239 [2024-11-15 10:59:33.986924] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:15:47.239 10:59:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:15:47.498 NVMe0n1 00:15:47.498 10:59:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:15:47.757 00:15:47.757 10:59:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:15:48.323 00:15:48.323 10:59:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:48.323 10:59:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:15:48.582 10:59:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:48.582 10:59:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:15:51.870 10:59:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:51.870 10:59:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:15:52.128 10:59:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=75299 00:15:52.128 10:59:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:52.128 10:59:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 75299 00:15:53.067 { 00:15:53.067 "results": [ 00:15:53.067 { 00:15:53.067 "job": "NVMe0n1", 00:15:53.067 "core_mask": "0x1", 00:15:53.067 "workload": "verify", 00:15:53.067 "status": "finished", 00:15:53.067 "verify_range": { 00:15:53.067 "start": 0, 00:15:53.067 "length": 16384 00:15:53.067 }, 00:15:53.067 "queue_depth": 128, 
00:15:53.067 "io_size": 4096, 00:15:53.067 "runtime": 1.005492, 00:15:53.067 "iops": 7292.947134338215, 00:15:53.067 "mibps": 28.48807474350865, 00:15:53.067 "io_failed": 0, 00:15:53.067 "io_timeout": 0, 00:15:53.067 "avg_latency_us": 17466.18886379133, 00:15:53.067 "min_latency_us": 1176.669090909091, 00:15:53.067 "max_latency_us": 15847.796363636364 00:15:53.067 } 00:15:53.067 ], 00:15:53.067 "core_count": 1 00:15:53.067 } 00:15:53.067 10:59:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:53.067 [2024-11-15 10:59:33.176255] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:15:53.067 [2024-11-15 10:59:33.176355] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75230 ] 00:15:53.067 [2024-11-15 10:59:33.316427] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:53.067 [2024-11-15 10:59:33.363166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:53.067 [2024-11-15 10:59:33.416830] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:53.067 [2024-11-15 10:59:35.424298] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:15:53.067 [2024-11-15 10:59:35.424423] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:53.067 [2024-11-15 10:59:35.424449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.067 [2024-11-15 10:59:35.424467] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:53.067 [2024-11-15 10:59:35.424480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.067 [2024-11-15 10:59:35.424494] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:53.067 [2024-11-15 10:59:35.424505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.067 [2024-11-15 10:59:35.424518] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:53.067 [2024-11-15 10:59:35.424530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.067 [2024-11-15 10:59:35.424573] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:15:53.067 [2024-11-15 10:59:35.424628] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:15:53.067 [2024-11-15 10:59:35.424661] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x154e710 (9): Bad file descriptor 00:15:53.067 [2024-11-15 10:59:35.430881] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:15:53.067 Running I/O for 1 seconds... 
00:15:53.067 7205.00 IOPS, 28.14 MiB/s 00:15:53.067 Latency(us) 00:15:53.067 [2024-11-15T10:59:39.928Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:53.067 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:53.067 Verification LBA range: start 0x0 length 0x4000 00:15:53.067 NVMe0n1 : 1.01 7292.95 28.49 0.00 0.00 17466.19 1176.67 15847.80 00:15:53.067 [2024-11-15T10:59:39.928Z] =================================================================================================================== 00:15:53.067 [2024-11-15T10:59:39.928Z] Total : 7292.95 28.49 0.00 0.00 17466.19 1176.67 15847.80 00:15:53.067 10:59:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:15:53.067 10:59:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:53.634 10:59:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:53.634 10:59:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:53.634 10:59:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:15:53.893 10:59:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:54.152 10:59:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:15:57.467 10:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:57.467 10:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:15:57.467 10:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 75230 00:15:57.467 10:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 75230 ']' 00:15:57.467 10:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 75230 00:15:57.467 10:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:15:57.467 10:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:57.467 10:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75230 00:15:57.467 10:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:57.467 10:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:57.467 killing process with pid 75230 00:15:57.467 10:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75230' 00:15:57.467 10:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 75230 00:15:57.467 10:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 75230 00:15:57.726 10:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:15:57.727 10:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:57.986 10:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:15:57.986 10:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:57.986 10:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:15:57.986 10:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:57.986 10:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:15:57.986 10:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:57.986 10:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:15:57.986 10:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:57.986 10:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:57.986 rmmod nvme_tcp 00:15:57.986 rmmod nvme_fabrics 00:15:57.986 rmmod nvme_keyring 00:15:57.986 10:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:58.244 10:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:15:58.244 10:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:15:58.244 10:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 74984 ']' 00:15:58.244 10:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 74984 00:15:58.244 10:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 74984 ']' 00:15:58.244 10:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 74984 00:15:58.244 10:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:15:58.244 10:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:58.244 10:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74984 00:15:58.244 10:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:58.244 killing process with pid 74984 00:15:58.244 10:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:58.244 10:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74984' 00:15:58.244 10:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 74984 00:15:58.244 10:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 74984 00:15:58.502 10:59:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:58.502 10:59:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:58.502 10:59:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:58.502 10:59:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:15:58.502 10:59:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:15:58.502 10:59:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:58.502 10:59:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:15:58.502 10:59:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk 
== \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:58.502 10:59:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:58.502 10:59:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:58.502 10:59:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:58.502 10:59:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:58.502 10:59:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:58.502 10:59:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:58.502 10:59:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:58.502 10:59:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:58.502 10:59:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:58.502 10:59:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:58.502 10:59:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:58.502 10:59:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:58.502 10:59:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:58.502 10:59:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:58.761 10:59:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:58.761 10:59:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:58.761 10:59:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:58.761 10:59:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:58.761 10:59:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@300 -- # return 0 00:15:58.761 00:15:58.761 real 0m31.660s 00:15:58.761 user 2m1.583s 00:15:58.761 sys 0m5.739s 00:15:58.761 10:59:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:58.761 ************************************ 00:15:58.761 END TEST nvmf_failover 00:15:58.761 ************************************ 00:15:58.761 10:59:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:58.761 10:59:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:15:58.761 10:59:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:58.761 10:59:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:58.761 10:59:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:58.761 ************************************ 00:15:58.761 START TEST nvmf_host_discovery 00:15:58.761 ************************************ 00:15:58.761 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:15:58.761 * Looking for test storage... 
00:15:58.761 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:58.761 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:58.761 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:15:58.762 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:58.762 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:58.762 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:58.762 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:58.762 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:58.762 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:15:58.762 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:15:58.762 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:15:58.762 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:15:58.762 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:15:58.762 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:15:58.762 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:15:58.762 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:58.762 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:15:58.762 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:15:58.762 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:58.762 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:58.762 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:15:58.762 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:15:58.762 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:58.762 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:15:59.021 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:15:59.021 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:15:59.021 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:15:59.021 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:59.021 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:15:59.021 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:15:59.021 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:59.021 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:59.021 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:15:59.021 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:59.021 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:59.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:59.021 --rc genhtml_branch_coverage=1 00:15:59.021 --rc genhtml_function_coverage=1 00:15:59.021 --rc genhtml_legend=1 00:15:59.021 --rc geninfo_all_blocks=1 00:15:59.021 --rc geninfo_unexecuted_blocks=1 00:15:59.021 00:15:59.021 ' 00:15:59.021 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:59.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:59.021 --rc genhtml_branch_coverage=1 00:15:59.021 --rc genhtml_function_coverage=1 00:15:59.021 --rc genhtml_legend=1 00:15:59.021 --rc geninfo_all_blocks=1 00:15:59.021 --rc geninfo_unexecuted_blocks=1 00:15:59.021 00:15:59.021 ' 00:15:59.021 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:59.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:59.021 --rc genhtml_branch_coverage=1 00:15:59.021 --rc genhtml_function_coverage=1 00:15:59.021 --rc genhtml_legend=1 00:15:59.021 --rc geninfo_all_blocks=1 00:15:59.021 --rc geninfo_unexecuted_blocks=1 00:15:59.021 00:15:59.021 ' 00:15:59.021 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:59.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:59.021 --rc genhtml_branch_coverage=1 00:15:59.021 --rc genhtml_function_coverage=1 00:15:59.021 --rc genhtml_legend=1 00:15:59.021 --rc geninfo_all_blocks=1 00:15:59.021 --rc geninfo_unexecuted_blocks=1 00:15:59.021 00:15:59.021 ' 00:15:59.021 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:59.021 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:15:59.021 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:59.021 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:59.021 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:59.021 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:59.021 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:59.021 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:59.021 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:59.021 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:59.021 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:59.021 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:59.021 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:15:59.021 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:15:59.021 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:59.021 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:59.021 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:59.021 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:59.021 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:59.021 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:15:59.021 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:59.021 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:59.021 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:59.021 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.021 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.021 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.021 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:15:59.021 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.021 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:15:59.021 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:59.021 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:59.021 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:59.021 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:59.021 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:59.021 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:59.021 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:59.021 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:59.021 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:59.021 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:59.021 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:15:59.021 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:15:59.021 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- 
# DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:15:59.021 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:15:59.021 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:15:59.021 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:15:59.021 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:15:59.021 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:59.021 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:59.021 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:59.021 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:59.021 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:59.021 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:59.021 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:59.021 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:59.021 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:59.021 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:59.021 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:59.021 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:59.021 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:59.021 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:59.021 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:59.021 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:59.021 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:59.021 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:59.022 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:59.022 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:59.022 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:59.022 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:59.022 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:59.022 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:59.022 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:59.022 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:15:59.022 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:59.022 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:59.022 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:59.022 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:59.022 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:59.022 Cannot find device "nvmf_init_br" 00:15:59.022 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:15:59.022 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:59.022 Cannot find device "nvmf_init_br2" 00:15:59.022 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:15:59.022 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:59.022 Cannot find device "nvmf_tgt_br" 00:15:59.022 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # true 00:15:59.022 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:59.022 Cannot find device "nvmf_tgt_br2" 00:15:59.022 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # true 00:15:59.022 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:59.022 Cannot find device "nvmf_init_br" 00:15:59.022 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # true 00:15:59.022 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:59.022 Cannot find device "nvmf_init_br2" 00:15:59.022 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # true 00:15:59.022 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:59.022 Cannot find device "nvmf_tgt_br" 00:15:59.022 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # true 00:15:59.022 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:59.022 Cannot find device "nvmf_tgt_br2" 00:15:59.022 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # true 00:15:59.022 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:59.022 Cannot find device "nvmf_br" 00:15:59.022 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # true 00:15:59.022 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:59.022 Cannot find device "nvmf_init_if" 00:15:59.022 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # true 00:15:59.022 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:59.022 Cannot find device "nvmf_init_if2" 00:15:59.022 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # true 00:15:59.022 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:59.022 Cannot open network namespace "nvmf_tgt_ns_spdk": No such 
file or directory 00:15:59.022 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # true 00:15:59.022 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:59.022 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:59.022 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # true 00:15:59.022 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:59.022 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:59.022 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:59.022 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:59.022 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:59.022 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:59.022 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:59.022 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:59.022 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:59.022 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:59.022 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:59.022 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:59.022 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:59.022 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:59.281 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:59.281 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:59.281 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:59.281 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:59.281 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:59.281 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:59.281 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:59.281 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:59.281 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:59.281 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:59.281 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:59.281 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:59.281 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:59.281 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:59.281 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:59.281 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:59.281 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:59.281 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:59.281 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:59.281 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:59.281 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.108 ms 00:15:59.281 00:15:59.281 --- 10.0.0.3 ping statistics --- 00:15:59.281 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:59.281 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:15:59.281 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:59.281 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:59.281 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.064 ms 00:15:59.281 00:15:59.281 --- 10.0.0.4 ping statistics --- 00:15:59.281 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:59.281 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:15:59.281 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:59.281 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:59.281 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:15:59.281 00:15:59.281 --- 10.0.0.1 ping statistics --- 00:15:59.281 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:59.281 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:15:59.281 10:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:59.281 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:59.281 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.045 ms 00:15:59.281 00:15:59.281 --- 10.0.0.2 ping statistics --- 00:15:59.281 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:59.281 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:15:59.281 10:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:59.281 10:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@461 -- # return 0 00:15:59.281 10:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:59.281 10:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:59.281 10:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:59.281 10:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:59.281 10:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:59.281 10:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:59.281 10:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:59.281 10:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:15:59.281 10:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:59.281 10:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:59.281 10:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:59.281 10:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=75620 00:15:59.281 10:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 75620 00:15:59.281 10:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:59.281 10:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 75620 ']' 00:15:59.281 10:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:59.281 10:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:59.281 10:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:59.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:59.281 10:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:59.281 10:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:59.281 [2024-11-15 10:59:46.080244] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
00:15:59.281 [2024-11-15 10:59:46.080349] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:59.540 [2024-11-15 10:59:46.228917] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:59.540 [2024-11-15 10:59:46.289346] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:59.540 [2024-11-15 10:59:46.289417] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:59.540 [2024-11-15 10:59:46.289432] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:59.540 [2024-11-15 10:59:46.289443] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:59.540 [2024-11-15 10:59:46.289452] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:59.540 [2024-11-15 10:59:46.289953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:59.540 [2024-11-15 10:59:46.363882] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:59.800 10:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:59.800 10:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:15:59.800 10:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:59.800 10:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:59.800 10:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:59.800 10:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:59.800 10:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:59.800 10:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.800 10:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:59.800 [2024-11-15 10:59:46.486537] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:59.800 10:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.800 10:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:15:59.800 10:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.800 10:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:59.800 [2024-11-15 10:59:46.494755] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:15:59.800 10:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.800 10:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:15:59.800 10:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.800 10:59:46 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:59.800 null0 00:15:59.800 10:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.800 10:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:15:59.800 10:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.800 10:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:59.800 null1 00:15:59.800 10:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.800 10:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:15:59.800 10:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.800 10:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:59.800 10:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.800 10:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=75649 00:15:59.800 10:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:15:59.800 10:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 75649 /tmp/host.sock 00:15:59.800 10:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 75649 ']' 00:15:59.800 10:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:15:59.800 10:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:59.800 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:15:59.800 10:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:15:59.800 10:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:59.800 10:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:59.801 [2024-11-15 10:59:46.584058] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
00:15:59.801 [2024-11-15 10:59:46.584181] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75649 ] 00:16:00.060 [2024-11-15 10:59:46.736999] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:00.060 [2024-11-15 10:59:46.806662] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:00.060 [2024-11-15 10:59:46.867458] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:00.320 10:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:00.320 10:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:16:00.320 10:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:00.320 10:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:16:00.320 10:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.320 10:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:00.320 10:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.320 10:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:16:00.320 10:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.320 10:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:00.320 10:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.320 10:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:16:00.320 10:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:16:00.320 10:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:00.320 10:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:00.320 10:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.320 10:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:00.320 10:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:00.320 10:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:00.320 10:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.320 10:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:16:00.320 10:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:16:00.320 10:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:00.320 10:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:00.320 10:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # jq -r '.[].name' 00:16:00.320 10:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:00.320 10:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.320 10:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:00.320 10:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.320 10:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:16:00.320 10:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:16:00.320 10:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.320 10:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:00.320 10:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.320 10:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:16:00.320 10:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:00.320 10:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:00.320 10:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:00.320 10:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.320 10:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:00.320 10:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:00.320 10:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.320 10:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:16:00.320 10:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:16:00.320 10:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:00.320 10:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.320 10:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:00.320 10:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:00.320 10:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:00.320 10:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:00.320 10:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.580 10:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:16:00.580 10:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:16:00.580 10:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.580 10:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:00.580 10:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.580 10:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- host/discovery.sh@91 -- # get_subsystem_names 00:16:00.580 10:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:00.580 10:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:00.580 10:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.580 10:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:00.580 10:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:00.580 10:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:00.580 10:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.580 10:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:16:00.580 10:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:16:00.580 10:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:00.580 10:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.580 10:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:00.580 10:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:00.580 10:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:00.580 10:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:00.580 10:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.581 10:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:16:00.581 10:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:16:00.581 10:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.581 10:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:00.581 [2024-11-15 10:59:47.290926] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:00.581 10:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.581 10:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:16:00.581 10:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:00.581 10:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.581 10:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:00.581 10:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:00.581 10:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:00.581 10:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:00.581 10:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.581 10:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:16:00.581 10:59:47 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:16:00.581 10:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:00.581 10:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:00.581 10:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.581 10:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:00.581 10:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:00.581 10:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:00.581 10:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.581 10:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:16:00.581 10:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:16:00.581 10:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:16:00.581 10:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:00.581 10:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:00.581 10:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:16:00.581 10:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:00.581 10:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:00.581 10:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:16:00.581 10:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:00.581 10:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:16:00.581 10:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.581 10:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:00.581 10:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.840 10:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:16:00.840 10:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:16:00.840 10:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:16:00.840 10:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:00.840 10:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:16:00.840 10:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.840 10:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:00.840 10:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.840 10:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:00.840 10:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:00.840 10:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:16:00.840 10:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:00.840 10:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:16:00.840 10:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:16:00.840 10:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:00.840 10:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.840 10:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:00.840 10:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:00.840 10:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:00.840 10:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:00.840 10:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.840 10:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:16:00.840 10:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:16:01.407 [2024-11-15 10:59:47.960109] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:16:01.407 [2024-11-15 10:59:47.960146] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:16:01.407 
[2024-11-15 10:59:47.960169] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:16:01.407 [2024-11-15 10:59:47.966161] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:16:01.407 [2024-11-15 10:59:48.020600] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:16:01.407 [2024-11-15 10:59:48.021638] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1f53e50:1 started. 00:16:01.407 [2024-11-15 10:59:48.023674] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:16:01.407 [2024-11-15 10:59:48.023712] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:16:01.407 [2024-11-15 10:59:48.028463] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1f53e50 was disconnected and freed. delete nvme_qpair. 00:16:01.973 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:01.973 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:16:01.973 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:16:01.973 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:01.973 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:01.973 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.973 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:01.973 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:01.973 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:01.973 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.973 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:01.973 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:01.973 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:16:01.973 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:16:01.973 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:16:01.973 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:01.973 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:16:01.973 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:16:01.973 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:01.973 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:01.973 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:01.973 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:01.973 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:01.973 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:01.973 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.973 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:16:01.973 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:01.973 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:16:01.973 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:16:01.973 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:16:01.973 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:01.973 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:16:01.973 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:16:01.973 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:16:01.973 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:16:01.973 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:16:01.973 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.973 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:01.973 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:16:01.973 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.973 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:16:01.973 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:01.973 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:16:01.973 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:16:01.973 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:01.973 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:01.973 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:16:01.973 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:01.973 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval 
get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:01.973 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:16:01.973 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:16:01.973 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:16:01.973 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.973 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:01.973 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.973 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:16:01.973 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:16:01.973 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:16:01.973 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:01.974 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:16:01.974 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.974 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:01.974 [2024-11-15 10:59:48.762796] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1f61f80:1 started. 00:16:01.974 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.974 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:01.974 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:01.974 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:16:01.974 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:01.974 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:16:01.974 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:16:01.974 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:01.974 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:01.974 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.974 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:01.974 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:01.974 [2024-11-15 10:59:48.768812] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1f61f80 was disconnected and freed. delete nvme_qpair. 
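The trace above is driven by a small set of harness helpers: a bounded retry loop (waitforcondition, common/autotest_common.sh @918-@922) and jq pipelines over rpc_cmd output (host/discovery.sh @55/@59/@63/@74). A minimal bash sketch of that pattern, reconstructed from the traced commands; the rpc_cmd stand-in and the pause between retries are assumptions, everything else mirrors what the trace shows:

    # Simplified stand-in for the harness wrapper around SPDK's scripts/rpc.py (assumption).
    rpc_cmd() { scripts/rpc.py "$@"; }

    # Poll a shell condition up to 10 times, as the traced waitforcondition does.
    waitforcondition() {
        local cond=$1
        local max=10
        while ((max--)); do
            eval "$cond" && return 0
            sleep 1   # assumed pause; the retry interval is not visible in the trace
        done
        return 1      # the trace never hits the failure path, so this part is a guess
    }

    # Accessors rebuilt from the rpc_cmd/jq pipelines at host/discovery.sh@59, @55 and @63.
    get_subsystem_names() {
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    }
    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    get_subsystem_paths() {
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" |
            jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }
    get_notification_count() {
        # host/discovery.sh@74-@75: count notifications newer than the last seen id, then
        # advance the id; the increment is inferred from the traced values (1, 2, then 4).
        notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
        notify_id=$((notify_id + notification_count))
    }

    # Used as in the trace, e.g.:
    #   waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
    #   waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]'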
00:16:01.974 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:01.974 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.974 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:01.974 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:01.974 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:16:01.974 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:16:01.974 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:01.974 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:01.974 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:16:01.974 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:01.974 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:01.974 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:16:01.974 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:16:01.974 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.974 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:01.974 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:01.974 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.231 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:16:02.231 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:16:02.231 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:16:02.231 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:02.231 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 00:16:02.231 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.231 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:02.231 [2024-11-15 10:59:48.868229] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:16:02.231 [2024-11-15 10:59:48.869324] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:16:02.231 [2024-11-15 10:59:48.869377] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:16:02.231 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.231 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:02.231 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:02.231 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:16:02.231 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:02.231 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:16:02.231 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:16:02.231 [2024-11-15 10:59:48.875301] bdev_nvme.c:7308:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for nvme0 00:16:02.231 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:02.231 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:02.231 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.231 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:02.231 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:02.231 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:02.231 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.231 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:02.231 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:02.231 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:02.231 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:02.231 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:16:02.231 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:02.231 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:16:02.231 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:16:02.231 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:02.231 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:02.231 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:02.231 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:02.231 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.231 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:02.231 [2024-11-15 10:59:48.933782] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4421 00:16:02.231 [2024-11-15 10:59:48.933838] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:16:02.232 [2024-11-15 10:59:48.933849] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:16:02.232 [2024-11-15 10:59:48.933855] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:16:02.232 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.232 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:02.232 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:02.232 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:16:02.232 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:16:02.232 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:16:02.232 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:02.232 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:16:02.232 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:16:02.232 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:16:02.232 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:16:02.232 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:16:02.232 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.232 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:16:02.232 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:02.232 10:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.232 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:16:02.232 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:02.232 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:16:02.232 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:16:02.232 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:02.232 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:02.232 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:16:02.232 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:02.232 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:02.232 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:16:02.232 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:02.232 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:16:02.232 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.232 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:02.232 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.232 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:16:02.232 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:16:02.232 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:16:02.232 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:02.232 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:16:02.232 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.232 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:02.232 [2024-11-15 10:59:49.072516] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:16:02.232 [2024-11-15 10:59:49.072585] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:16:02.232 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.232 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:02.232 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:02.232 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:16:02.232 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:02.232 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:16:02.232 [2024-11-15 10:59:49.078548] bdev_nvme.c:7171:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 not found 00:16:02.232 [2024-11-15 10:59:49.078590] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:16:02.232 [2024-11-15 10:59:49.078712] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:02.232 [2024-11-15 10:59:49.078743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.232 [2024-11-15 10:59:49.078756] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:02.232 [2024-11-15 10:59:49.078765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.232 [2024-11-15 10:59:49.078775] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:02.232 [2024-11-15 10:59:49.078783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.232 [2024-11-15 10:59:49.078793] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:02.232 [2024-11-15 10:59:49.078802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:02.232 [2024-11-15 10:59:49.078811] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f30230 is same with the state(6) to be set 00:16:02.232 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:16:02.232 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:02.232 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.232 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:02.232 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:02.232 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:02.232 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:02.490 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.490 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:02.490 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:02.490 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:02.490 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:02.490 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:16:02.490 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:02.490 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:16:02.490 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:16:02.490 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:02.490 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:02.490 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:02.490 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.490 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:02.490 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:02.490 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.490 10:59:49 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:02.490 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:02.490 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:16:02.490 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:16:02.490 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:16:02.490 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:02.490 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:16:02.490 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:16:02.490 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:16:02.490 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:16:02.490 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:16:02.490 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.490 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:16:02.490 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:02.490 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.490 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:16:02.490 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:02.490 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:16:02.490 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:16:02.490 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:02.490 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:02.490 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:16:02.490 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:02.490 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:02.490 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:16:02.490 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:02.490 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:16:02.490 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.490 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:02.490 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.490 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:16:02.490 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:16:02.490 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:16:02.490 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:02.490 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:16:02.491 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.491 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:02.491 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.491 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:16:02.491 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:16:02.491 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:16:02.491 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:02.491 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:16:02.491 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:16:02.491 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:02.491 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:02.491 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.491 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:02.491 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:02.491 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:02.491 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.749 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:16:02.749 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:02.749 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:16:02.749 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:16:02.749 10:59:49 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:16:02.749 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:02.749 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:16:02.749 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:16:02.749 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:02.749 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:02.749 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.749 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:02.749 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:02.749 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:02.749 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.749 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:16:02.749 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:02.749 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:16:02.749 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:16:02.749 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:02.749 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:02.749 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:16:02.749 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:02.749 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:02.749 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:16:02.749 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:16:02.749 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.749 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:02.749 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:02.749 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.749 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:16:02.749 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:16:02.749 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:16:02.749 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:02.749 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:02.749 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.749 10:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:03.684 [2024-11-15 10:59:50.491044] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:16:03.684 [2024-11-15 10:59:50.491080] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:16:03.684 [2024-11-15 10:59:50.491115] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:16:03.684 [2024-11-15 10:59:50.497085] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem nvme0 00:16:03.944 [2024-11-15 10:59:50.555417] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.3:4421 00:16:03.944 [2024-11-15 10:59:50.556379] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x1f28eb0:1 started. 00:16:03.944 [2024-11-15 10:59:50.558801] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:16:03.944 [2024-11-15 10:59:50.558856] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:16:03.944 10:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.944 [2024-11-15 10:59:50.560313] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x1f28eb0 was disconnected and freed. delete nvme_qpair. 
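At this point the log shows discovery being restarted against the same endpoint. For reference, the traced command at host/discovery.sh@141 corresponds to the following standalone invocation; treating rpc_cmd as a thin wrapper over SPDK's scripts/rpc.py is an assumption, while the socket path and every flag are taken verbatim from the trace:

    # Start a discovery service "nvme" against the referral at 10.0.0.3:8009 and
    # wait (-w / wait_for_attach) until the discovered subsystems are attached.
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test -w

Re-issuing the call with the same controller name (-b nvme) is exactly what the NOT wrapper below exercises, and it is expected to fail with the JSON-RPC error recorded in the log: code -17, "File exists".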
00:16:03.944 10:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:03.944 10:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:16:03.944 10:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:03.944 10:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:03.944 10:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:03.944 10:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:03.944 10:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:03.944 10:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:03.944 10:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.944 10:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:03.944 request: 00:16:03.944 { 00:16:03.944 "name": "nvme", 00:16:03.944 "trtype": "tcp", 00:16:03.944 "traddr": "10.0.0.3", 00:16:03.944 "adrfam": "ipv4", 00:16:03.944 "trsvcid": "8009", 00:16:03.944 "hostnqn": "nqn.2021-12.io.spdk:test", 00:16:03.944 "wait_for_attach": true, 00:16:03.944 "method": "bdev_nvme_start_discovery", 00:16:03.944 "req_id": 1 00:16:03.944 } 00:16:03.944 Got JSON-RPC error response 00:16:03.944 response: 00:16:03.944 { 00:16:03.944 "code": -17, 00:16:03.944 "message": "File exists" 00:16:03.944 } 00:16:03.944 10:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:03.944 10:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:16:03.944 10:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:03.944 10:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:03.944 10:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:03.944 10:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:16:03.944 10:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:16:03.944 10:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.944 10:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:03.944 10:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:16:03.944 10:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:16:03.944 10:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:16:03.944 10:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.944 10:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:16:03.944 10:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:16:03.944 10:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:03.944 10:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:03.944 10:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:03.944 10:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.944 10:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:03.944 10:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:03.944 10:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.944 10:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:03.944 10:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:03.944 10:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:16:03.944 10:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:03.944 10:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:03.944 10:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:03.944 10:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:03.944 10:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:03.944 10:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:03.944 10:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.944 10:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:03.944 request: 00:16:03.944 { 00:16:03.944 "name": "nvme_second", 00:16:03.944 "trtype": "tcp", 00:16:03.944 "traddr": "10.0.0.3", 00:16:03.944 "adrfam": "ipv4", 00:16:03.944 "trsvcid": "8009", 00:16:03.944 "hostnqn": "nqn.2021-12.io.spdk:test", 00:16:03.944 "wait_for_attach": true, 00:16:03.944 "method": "bdev_nvme_start_discovery", 00:16:03.944 "req_id": 1 00:16:03.944 } 00:16:03.944 Got JSON-RPC error response 00:16:03.944 response: 00:16:03.944 { 00:16:03.944 "code": -17, 00:16:03.944 "message": "File exists" 00:16:03.944 } 00:16:03.944 10:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:03.944 10:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:16:03.944 10:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:03.944 10:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # 
[[ -n '' ]] 00:16:03.944 10:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:03.944 10:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:16:03.944 10:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:16:03.944 10:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.944 10:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:03.944 10:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:16:03.944 10:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:16:03.944 10:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:16:03.944 10:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.944 10:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:16:03.944 10:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:16:03.944 10:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:03.944 10:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.944 10:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:03.944 10:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:03.944 10:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:03.944 10:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:03.944 10:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.203 10:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:04.203 10:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:16:04.203 10:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:16:04.203 10:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:16:04.203 10:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:04.203 10:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:04.203 10:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:04.203 10:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:04.203 10:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:16:04.203 10:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:04.204 10:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:05.139 [2024-11-15 10:59:51.839191] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:05.139 [2024-11-15 10:59:51.839272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f54e40 with addr=10.0.0.3, port=8010 00:16:05.139 [2024-11-15 10:59:51.839295] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:16:05.139 [2024-11-15 10:59:51.839305] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:16:05.139 [2024-11-15 10:59:51.839314] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:16:06.076 [2024-11-15 10:59:52.839179] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:06.076 [2024-11-15 10:59:52.839249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f54e40 with addr=10.0.0.3, port=8010 00:16:06.076 [2024-11-15 10:59:52.839273] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:16:06.076 [2024-11-15 10:59:52.839283] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:16:06.076 [2024-11-15 10:59:52.839291] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:16:07.010 [2024-11-15 10:59:53.839054] bdev_nvme.c:7427:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] timed out while attaching discovery ctrlr 00:16:07.010 request: 00:16:07.010 { 00:16:07.010 "name": "nvme_second", 00:16:07.010 "trtype": "tcp", 00:16:07.010 "traddr": "10.0.0.3", 00:16:07.010 "adrfam": "ipv4", 00:16:07.010 "trsvcid": "8010", 00:16:07.010 "hostnqn": "nqn.2021-12.io.spdk:test", 00:16:07.010 "wait_for_attach": false, 00:16:07.010 "attach_timeout_ms": 3000, 00:16:07.010 "method": "bdev_nvme_start_discovery", 00:16:07.010 "req_id": 1 00:16:07.010 } 00:16:07.010 Got JSON-RPC error response 00:16:07.010 response: 00:16:07.010 { 00:16:07.010 "code": -110, 00:16:07.010 "message": "Connection timed out" 00:16:07.010 } 00:16:07.010 10:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:07.010 10:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:16:07.010 10:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:07.010 10:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:07.010 10:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:07.010 10:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:16:07.010 10:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:16:07.010 10:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.010 10:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:07.010 10:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:16:07.010 10:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:16:07.010 10:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:16:07.010 10:59:53 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.269 10:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:16:07.269 10:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:16:07.269 10:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 75649 00:16:07.269 10:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:16:07.269 10:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:07.269 10:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:16:07.269 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:07.269 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:16:07.269 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:07.269 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:07.269 rmmod nvme_tcp 00:16:07.269 rmmod nvme_fabrics 00:16:07.269 rmmod nvme_keyring 00:16:07.269 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:07.269 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:16:07.269 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:16:07.269 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 75620 ']' 00:16:07.269 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 75620 00:16:07.269 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 75620 ']' 00:16:07.269 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 75620 00:16:07.269 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:16:07.269 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:07.269 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75620 00:16:07.528 killing process with pid 75620 00:16:07.528 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:07.528 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:07.528 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75620' 00:16:07.528 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 75620 00:16:07.528 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 75620 00:16:07.528 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:07.528 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:07.528 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:07.528 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:16:07.528 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:16:07.528 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:07.528 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:16:07.528 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:07.528 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:07.528 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:07.798 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:07.798 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:07.798 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:07.798 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:07.799 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:07.799 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:07.799 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:07.799 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:07.799 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:07.799 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:07.799 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:07.799 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:07.799 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:07.799 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:07.799 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:07.799 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:07.799 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@300 -- # return 0 00:16:07.799 00:16:07.799 real 0m9.185s 00:16:07.799 user 0m17.210s 00:16:07.799 sys 0m1.958s 00:16:07.799 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:07.799 ************************************ 00:16:07.799 END TEST nvmf_host_discovery 00:16:07.799 ************************************ 00:16:07.799 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:08.088 ************************************ 
00:16:08.088 START TEST nvmf_host_multipath_status 00:16:08.088 ************************************ 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:16:08.088 * Looking for test storage... 00:16:08.088 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:08.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:08.088 --rc genhtml_branch_coverage=1 00:16:08.088 --rc genhtml_function_coverage=1 00:16:08.088 --rc genhtml_legend=1 00:16:08.088 --rc geninfo_all_blocks=1 00:16:08.088 --rc geninfo_unexecuted_blocks=1 00:16:08.088 00:16:08.088 ' 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:08.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:08.088 --rc genhtml_branch_coverage=1 00:16:08.088 --rc genhtml_function_coverage=1 00:16:08.088 --rc genhtml_legend=1 00:16:08.088 --rc geninfo_all_blocks=1 00:16:08.088 --rc geninfo_unexecuted_blocks=1 00:16:08.088 00:16:08.088 ' 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:08.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:08.088 --rc genhtml_branch_coverage=1 00:16:08.088 --rc genhtml_function_coverage=1 00:16:08.088 --rc genhtml_legend=1 00:16:08.088 --rc geninfo_all_blocks=1 00:16:08.088 --rc geninfo_unexecuted_blocks=1 00:16:08.088 00:16:08.088 ' 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:08.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:08.088 --rc genhtml_branch_coverage=1 00:16:08.088 --rc genhtml_function_coverage=1 00:16:08.088 --rc genhtml_legend=1 00:16:08.088 --rc geninfo_all_blocks=1 00:16:08.088 --rc geninfo_unexecuted_blocks=1 00:16:08.088 00:16:08.088 ' 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:08.088 10:59:54 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:08.088 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@153 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:08.088 Cannot find device "nvmf_init_br" 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:08.088 Cannot find device "nvmf_init_br2" 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:16:08.088 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:08.349 Cannot find device "nvmf_tgt_br" 00:16:08.349 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # true 00:16:08.349 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:08.349 Cannot find device "nvmf_tgt_br2" 00:16:08.349 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # true 00:16:08.349 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:08.349 Cannot find device "nvmf_init_br" 00:16:08.349 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # true 00:16:08.349 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:08.349 Cannot find device "nvmf_init_br2" 00:16:08.349 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # true 00:16:08.349 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:08.349 Cannot find device "nvmf_tgt_br" 00:16:08.349 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # true 00:16:08.349 10:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:08.349 Cannot find device "nvmf_tgt_br2" 00:16:08.349 10:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # true 00:16:08.349 10:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:08.349 Cannot find device "nvmf_br" 00:16:08.349 10:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # true 00:16:08.349 10:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link delete 
nvmf_init_if 00:16:08.349 Cannot find device "nvmf_init_if" 00:16:08.349 10:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # true 00:16:08.349 10:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:08.349 Cannot find device "nvmf_init_if2" 00:16:08.349 10:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # true 00:16:08.349 10:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:08.349 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:08.349 10:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # true 00:16:08.349 10:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:08.349 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:08.349 10:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # true 00:16:08.349 10:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:08.349 10:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:08.349 10:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:08.349 10:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:08.349 10:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:08.349 10:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:08.349 10:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:08.349 10:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:08.349 10:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:08.349 10:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:08.349 10:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:08.349 10:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:08.349 10:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:08.349 10:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:08.349 10:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:08.349 10:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:08.349 10:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:08.349 10:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:08.349 10:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:08.349 10:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:08.350 10:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:08.350 10:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:08.350 10:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:08.608 10:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:08.608 10:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:08.608 10:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:08.609 10:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:08.609 10:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:08.609 10:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:08.609 10:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:08.609 10:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:08.609 10:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:08.609 10:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:08.609 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:08.609 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.090 ms 00:16:08.609 00:16:08.609 --- 10.0.0.3 ping statistics --- 00:16:08.609 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:08.609 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:16:08.609 10:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:08.609 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:08.609 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.090 ms 00:16:08.609 00:16:08.609 --- 10.0.0.4 ping statistics --- 00:16:08.609 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:08.609 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:16:08.609 10:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:08.609 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:08.609 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:16:08.609 00:16:08.609 --- 10.0.0.1 ping statistics --- 00:16:08.609 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:08.609 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:16:08.609 10:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:08.609 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:08.609 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms 00:16:08.609 00:16:08.609 --- 10.0.0.2 ping statistics --- 00:16:08.609 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:08.609 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:16:08.609 10:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:08.609 10:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@461 -- # return 0 00:16:08.609 10:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:08.609 10:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:08.609 10:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:08.609 10:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:08.609 10:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:08.609 10:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:08.609 10:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:08.609 10:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:16:08.609 10:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:08.609 10:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:08.609 10:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:08.609 10:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=76148 00:16:08.609 10:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:16:08.609 10:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 76148 00:16:08.609 10:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 76148 ']' 00:16:08.609 10:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:08.609 10:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:08.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:08.609 10:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
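At this point the harness has a complete virtual topology: 10.0.0.1 and 10.0.0.2 on nvmf_init_if/nvmf_init_if2 in the root namespace, 10.0.0.3 and 10.0.0.4 on nvmf_tgt_if/nvmf_tgt_if2 inside nvmf_tgt_ns_spdk, all joined through the nvmf_br bridge, and the four ping checks above confirm reachability in both directions. The nvmfappstart step just traced then launches the target inside that namespace (nvmfpid=76148) and waits for its RPC socket before continuing. A minimal stand-alone sketch of that step, using only the paths and flags visible in the trace; the rpc_get_methods polling loop is an assumption standing in for the harness's own waitforlisten logic, and the script is assumed to run as root:

    # Hedged sketch: start nvmf_tgt inside the test namespace and wait for its RPC socket.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
    launcher_pid=$!    # pid of the "ip netns exec" wrapper, kept only for cleanup
    # Poll until the app answers on /var/tmp/spdk.sock (approximation of waitforlisten).
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
            rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
    echo "nvmf_tgt is listening on /var/tmp/spdk.sock"

Once that socket answers, every later target-side configuration call in the trace goes through it.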
00:16:08.609 10:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:08.609 10:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:08.609 [2024-11-15 10:59:55.359729] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:16:08.609 [2024-11-15 10:59:55.359824] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:08.869 [2024-11-15 10:59:55.509642] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:08.869 [2024-11-15 10:59:55.580843] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:08.869 [2024-11-15 10:59:55.580904] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:08.869 [2024-11-15 10:59:55.580918] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:08.869 [2024-11-15 10:59:55.580928] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:08.869 [2024-11-15 10:59:55.580938] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:08.869 [2024-11-15 10:59:55.582334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:08.869 [2024-11-15 10:59:55.582365] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:08.869 [2024-11-15 10:59:55.641052] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:08.869 10:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:08.869 10:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:16:08.869 10:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:08.869 10:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:08.869 10:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:09.128 10:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:09.128 10:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=76148 00:16:09.128 10:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:09.388 [2024-11-15 10:59:56.040263] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:09.388 10:59:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:16:09.647 Malloc0 00:16:09.647 10:59:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:16:09.906 10:59:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:10.165 10:59:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:10.423 [2024-11-15 10:59:57.195977] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:10.423 10:59:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:16:10.682 [2024-11-15 10:59:57.448300] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:16:10.682 10:59:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=76196 00:16:10.682 10:59:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:16:10.682 10:59:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:10.682 10:59:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 76196 /var/tmp/bdevperf.sock 00:16:10.682 10:59:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 76196 ']' 00:16:10.682 10:59:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:10.682 10:59:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:10.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:10.682 10:59:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
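Condensed, the target-side configuration just issued over /var/tmp/spdk.sock is the following RPC sequence; every command and value is taken directly from the trace above, and only the rpc shell variable is added here for brevity:

    # Target-side RPC sequence from the trace, collapsed into one place.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0            # 64 MiB malloc bdev, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
         -a -s SPDK00000000000001 -r -m 2                # -r enables ANA reporting
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421

The two listeners on ports 4420 and 4421 are what give the initiator two paths to the same subsystem; bdevperf is started separately on /var/tmp/bdevperf.sock so the next block of RPCs can attach both paths to one controller and exercise multipath status.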
00:16:10.682 10:59:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:10.682 10:59:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:12.059 10:59:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:12.059 10:59:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:16:12.059 10:59:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:16:12.059 10:59:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:16:12.317 Nvme0n1 00:16:12.318 10:59:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:16:12.576 Nvme0n1 00:16:12.835 10:59:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:16:12.835 10:59:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:16:14.740 11:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:16:14.740 11:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:16:14.998 11:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:16:15.259 11:00:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:16:16.642 11:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:16:16.642 11:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:16.642 11:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:16.642 11:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:16.642 11:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:16.642 11:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:16.642 11:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:16.642 11:00:03 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:16.900 11:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:16.900 11:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:16.900 11:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:16.900 11:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:17.159 11:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:17.159 11:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:17.159 11:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:17.159 11:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:17.417 11:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:17.417 11:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:17.417 11:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:17.417 11:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:17.676 11:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:17.676 11:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:17.676 11:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:17.676 11:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:17.935 11:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:17.936 11:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:16:17.936 11:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:16:18.194 11:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:16:18.453 11:00:05 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:16:19.831 11:00:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:16:19.831 11:00:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:19.831 11:00:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:19.831 11:00:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:19.831 11:00:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:19.831 11:00:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:19.831 11:00:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:19.831 11:00:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:20.090 11:00:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:20.090 11:00:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:20.090 11:00:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:20.090 11:00:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:20.349 11:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:20.349 11:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:20.349 11:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:20.349 11:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:20.608 11:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:20.608 11:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:20.608 11:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:20.608 11:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:20.866 11:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:20.866 11:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:20.866 11:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:20.866 11:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:21.126 11:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:21.126 11:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:16:21.126 11:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:16:21.384 11:00:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:16:21.643 11:00:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:16:22.630 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:16:22.630 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:22.631 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:22.631 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:22.889 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:22.889 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:22.889 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:22.889 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:23.457 11:00:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:23.457 11:00:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:23.457 11:00:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:23.457 11:00:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:23.457 11:00:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:23.457 11:00:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 
connected true 00:16:23.457 11:00:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:23.457 11:00:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:23.717 11:00:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:23.717 11:00:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:23.717 11:00:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:23.717 11:00:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:23.975 11:00:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:23.975 11:00:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:23.975 11:00:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:23.975 11:00:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:24.234 11:00:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:24.234 11:00:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:16:24.234 11:00:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:16:24.801 11:00:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:16:24.801 11:00:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:16:26.177 11:00:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:16:26.177 11:00:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:26.177 11:00:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:26.177 11:00:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:26.177 11:00:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:26.177 11:00:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:26.177 11:00:13 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:26.177 11:00:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:26.436 11:00:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:26.436 11:00:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:26.436 11:00:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:26.436 11:00:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:27.004 11:00:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:27.004 11:00:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:27.004 11:00:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:27.004 11:00:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:27.004 11:00:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:27.004 11:00:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:27.004 11:00:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:27.004 11:00:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:27.262 11:00:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:27.262 11:00:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:16:27.262 11:00:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:27.262 11:00:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:27.520 11:00:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:27.520 11:00:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:16:27.521 11:00:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:16:27.779 11:00:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:16:28.043 11:00:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:16:29.427 11:00:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:16:29.427 11:00:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:29.427 11:00:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:29.427 11:00:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:29.427 11:00:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:29.427 11:00:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:29.427 11:00:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:29.427 11:00:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:29.686 11:00:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:29.686 11:00:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:29.686 11:00:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:29.686 11:00:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:29.945 11:00:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:29.945 11:00:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:29.945 11:00:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:29.945 11:00:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:30.204 11:00:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:30.204 11:00:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:16:30.204 11:00:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:30.204 11:00:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:16:30.462 11:00:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:30.462 11:00:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:16:30.463 11:00:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:30.463 11:00:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:30.722 11:00:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:30.722 11:00:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:16:30.722 11:00:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:16:30.981 11:00:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:16:31.547 11:00:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:16:32.483 11:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:16:32.483 11:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:32.483 11:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:32.483 11:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:32.742 11:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:32.742 11:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:32.742 11:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:32.742 11:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:33.002 11:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:33.002 11:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:33.002 11:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:33.002 11:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:33.261 11:00:19 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:33.261 11:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:33.261 11:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:33.261 11:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:33.521 11:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:33.521 11:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:16:33.521 11:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:33.521 11:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:33.779 11:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:33.779 11:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:33.779 11:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:33.779 11:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:34.038 11:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:34.038 11:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:16:34.038 11:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:16:34.038 11:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:16:34.606 11:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:16:34.606 11:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:16:35.600 11:00:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:16:35.600 11:00:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:35.600 11:00:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:35.600 11:00:22 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:36.168 11:00:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:36.168 11:00:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:36.168 11:00:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:36.168 11:00:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:36.168 11:00:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:36.168 11:00:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:36.168 11:00:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:36.168 11:00:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:36.427 11:00:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:36.427 11:00:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:36.427 11:00:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:36.427 11:00:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:36.686 11:00:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:36.686 11:00:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:36.686 11:00:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:36.686 11:00:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:36.945 11:00:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:36.945 11:00:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:36.945 11:00:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:36.945 11:00:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:37.203 11:00:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:37.203 11:00:23 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:16:37.203 11:00:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:16:37.462 11:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:16:37.721 11:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:16:38.660 11:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:16:38.660 11:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:38.660 11:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:38.660 11:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:38.920 11:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:38.920 11:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:38.920 11:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:38.920 11:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:39.178 11:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:39.178 11:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:39.178 11:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:39.178 11:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:39.746 11:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:39.746 11:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:39.746 11:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:39.746 11:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:39.746 11:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:39.746 11:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:39.746 11:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:39.746 11:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:40.315 11:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:40.315 11:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:40.315 11:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:40.315 11:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:40.315 11:00:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:40.315 11:00:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:16:40.315 11:00:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:16:40.575 11:00:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:16:40.834 11:00:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:16:42.212 11:00:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:16:42.212 11:00:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:42.212 11:00:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:42.212 11:00:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:42.212 11:00:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:42.212 11:00:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:42.212 11:00:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:42.212 11:00:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:42.471 11:00:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:42.471 11:00:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 
connected true 00:16:42.471 11:00:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:42.471 11:00:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:42.729 11:00:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:42.729 11:00:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:42.729 11:00:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:42.729 11:00:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:42.987 11:00:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:42.987 11:00:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:42.987 11:00:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:42.987 11:00:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:43.245 11:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:43.245 11:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:43.245 11:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:43.245 11:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:43.504 11:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:43.504 11:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:16:43.504 11:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:16:43.762 11:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:16:44.112 11:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:16:45.066 11:00:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:16:45.066 11:00:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:45.066 11:00:31 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:45.066 11:00:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:45.325 11:00:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:45.325 11:00:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:45.325 11:00:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:45.325 11:00:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:45.584 11:00:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:45.584 11:00:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:45.584 11:00:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:45.584 11:00:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:45.842 11:00:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:45.842 11:00:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:45.842 11:00:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:45.842 11:00:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:46.104 11:00:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:46.104 11:00:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:46.104 11:00:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:46.104 11:00:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:46.364 11:00:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:46.364 11:00:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:16:46.364 11:00:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:46.364 11:00:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").accessible' 00:16:46.624 11:00:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:46.624 11:00:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 76196 00:16:46.624 11:00:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 76196 ']' 00:16:46.624 11:00:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 76196 00:16:46.624 11:00:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:16:46.624 11:00:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:46.624 11:00:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76196 00:16:46.624 killing process with pid 76196 00:16:46.624 11:00:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:46.624 11:00:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:46.624 11:00:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76196' 00:16:46.624 11:00:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 76196 00:16:46.624 11:00:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 76196 00:16:46.624 { 00:16:46.624 "results": [ 00:16:46.624 { 00:16:46.624 "job": "Nvme0n1", 00:16:46.624 "core_mask": "0x4", 00:16:46.624 "workload": "verify", 00:16:46.624 "status": "terminated", 00:16:46.624 "verify_range": { 00:16:46.624 "start": 0, 00:16:46.624 "length": 16384 00:16:46.624 }, 00:16:46.624 "queue_depth": 128, 00:16:46.624 "io_size": 4096, 00:16:46.624 "runtime": 33.850319, 00:16:46.624 "iops": 9233.472807154343, 00:16:46.624 "mibps": 36.06825315294665, 00:16:46.624 "io_failed": 0, 00:16:46.624 "io_timeout": 0, 00:16:46.624 "avg_latency_us": 13833.301454354652, 00:16:46.624 "min_latency_us": 355.60727272727274, 00:16:46.624 "max_latency_us": 4026531.84 00:16:46.624 } 00:16:46.624 ], 00:16:46.624 "core_count": 1 00:16:46.624 } 00:16:46.891 11:00:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 76196 00:16:46.891 11:00:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:46.891 [2024-11-15 10:59:57.524839] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:16:46.891 [2024-11-15 10:59:57.524994] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76196 ] 00:16:46.891 [2024-11-15 10:59:57.675895] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:46.891 [2024-11-15 10:59:57.734520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:46.891 [2024-11-15 10:59:57.791915] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:46.891 Running I/O for 90 seconds... 
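The pattern exercised throughout the trace above: set_ANA_state flips the ANA state of the 4420 and 4421 listeners, the test sleeps one second, then check_status/port_status poll bdevperf's io_paths and compare the current/connected/accessible flags for each port. A minimal bash sketch of those helpers, reconstructed only from the rpc.py and jq invocations logged above (the authoritative definitions live in test/nvmf/host/multipath_status.sh; the literal address, NQN and socket paths are the ones seen in the trace):

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    bdevperf_rpc_sock=/var/tmp/bdevperf.sock

    set_ANA_state() {
        # $1 = ANA state for the 4420 listener, $2 = ANA state for the 4421 listener
        $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.3 -s 4420 -n "$1"
        $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.3 -s 4421 -n "$2"
    }

    port_status() {
        # $1 = trsvcid (port), $2 = io_path field (current|connected|accessible), $3 = expected value
        local status
        status=$($rpc_py -s $bdevperf_rpc_sock bdev_nvme_get_io_paths |
            jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$1\").$2")
        [[ "$status" == "$3" ]]
    }

    check_status() {
        # args: 4420.current 4421.current 4420.connected 4421.connected 4420.accessible 4421.accessible
        port_status 4420 current "$1" && port_status 4421 current "$2" &&
        port_status 4420 connected "$3" && port_status 4421 connected "$4" &&
        port_status 4420 accessible "$5" && port_status 4421 accessible "$6"
    }

With these, check_status false true true true false true reproduces the assertions logged for the inaccessible/optimized step above, and check_status true true true true true true the ones for optimized/optimized under the active_active policy.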
00:16:46.891 7068.00 IOPS, 27.61 MiB/s [2024-11-15T11:00:33.752Z] 7812.00 IOPS, 30.52 MiB/s [2024-11-15T11:00:33.752Z] 8184.00 IOPS, 31.97 MiB/s [2024-11-15T11:00:33.752Z] 8372.00 IOPS, 32.70 MiB/s [2024-11-15T11:00:33.752Z] 8462.40 IOPS, 33.06 MiB/s [2024-11-15T11:00:33.752Z] 8476.17 IOPS, 33.11 MiB/s [2024-11-15T11:00:33.752Z] 8541.86 IOPS, 33.37 MiB/s [2024-11-15T11:00:33.752Z] 8588.12 IOPS, 33.55 MiB/s [2024-11-15T11:00:33.752Z] 8632.22 IOPS, 33.72 MiB/s [2024-11-15T11:00:33.752Z] 8726.60 IOPS, 34.09 MiB/s [2024-11-15T11:00:33.752Z] 8907.82 IOPS, 34.80 MiB/s [2024-11-15T11:00:33.752Z] 9072.17 IOPS, 35.44 MiB/s [2024-11-15T11:00:33.752Z] 9187.23 IOPS, 35.89 MiB/s [2024-11-15T11:00:33.752Z] 9266.43 IOPS, 36.20 MiB/s [2024-11-15T11:00:33.752Z] [2024-11-15 11:00:14.537647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:68880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:46.891 [2024-11-15 11:00:14.537724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:16:46.891 [2024-11-15 11:00:14.537827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:68888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:46.891 [2024-11-15 11:00:14.537849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:46.891 [2024-11-15 11:00:14.537871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:68896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:46.891 [2024-11-15 11:00:14.537886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:46.891 [2024-11-15 11:00:14.537906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:68904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:46.891 [2024-11-15 11:00:14.537920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:46.891 [2024-11-15 11:00:14.537941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:68912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:46.891 [2024-11-15 11:00:14.537955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:46.891 [2024-11-15 11:00:14.537976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:68920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:46.891 [2024-11-15 11:00:14.537990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:46.891 [2024-11-15 11:00:14.538010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:68928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:46.891 [2024-11-15 11:00:14.538024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:46.891 [2024-11-15 11:00:14.538044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:68936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:46.891 [2024-11-15 11:00:14.538058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:46.891 [2024-11-15 11:00:14.538078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:68944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:46.891 [2024-11-15 11:00:14.538093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:46.891 [2024-11-15 11:00:14.538144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:68952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:46.891 [2024-11-15 11:00:14.538159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:16:46.891 [2024-11-15 11:00:14.538179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:68960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:46.891 [2024-11-15 11:00:14.538194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:46.891 [2024-11-15 11:00:14.538214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:68968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:46.891 [2024-11-15 11:00:14.538228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:16:46.891 [2024-11-15 11:00:14.538249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:68976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:46.891 [2024-11-15 11:00:14.538265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:46.891 [2024-11-15 11:00:14.538285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:68984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:46.891 [2024-11-15 11:00:14.538300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:46.891 [2024-11-15 11:00:14.538320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:68432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.891 [2024-11-15 11:00:14.538334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:46.891 [2024-11-15 11:00:14.538353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:68440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.891 [2024-11-15 11:00:14.538383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:46.891 [2024-11-15 11:00:14.538404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:68448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.891 [2024-11-15 11:00:14.538418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:16:46.891 [2024-11-15 11:00:14.538439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:68456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.891 [2024-11-15 11:00:14.538453] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:16:46.891 [2024-11-15 11:00:14.538474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:68464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.891 [2024-11-15 11:00:14.538489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:16:46.891 [2024-11-15 11:00:14.538510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:68472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.891 [2024-11-15 11:00:14.538524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:16:46.891 [2024-11-15 11:00:14.538545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:68480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.891 [2024-11-15 11:00:14.538559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:16:46.891 [2024-11-15 11:00:14.538601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:68488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.891 [2024-11-15 11:00:14.538620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:16:46.891 [2024-11-15 11:00:14.538642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:68992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:46.891 [2024-11-15 11:00:14.538657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:16:46.891 [2024-11-15 11:00:14.538678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:69000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:46.892 [2024-11-15 11:00:14.538693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:16:46.892 [2024-11-15 11:00:14.538746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:69008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:46.892 [2024-11-15 11:00:14.538766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:16:46.892 [2024-11-15 11:00:14.538787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:69016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:46.892 [2024-11-15 11:00:14.538802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:16:46.892 [2024-11-15 11:00:14.538822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:69024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:46.892 [2024-11-15 11:00:14.538837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:16:46.892 [2024-11-15 11:00:14.538857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:69032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:16:46.892 [2024-11-15 11:00:14.538871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:16:46.892 [2024-11-15 11:00:14.538892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:69040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:46.892 [2024-11-15 11:00:14.538907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:16:46.892 [2024-11-15 11:00:14.538927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:69048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:46.892 [2024-11-15 11:00:14.538942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:16:46.892 [2024-11-15 11:00:14.538962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:69056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:46.892 [2024-11-15 11:00:14.538977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:16:46.892 [2024-11-15 11:00:14.538997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:69064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:46.892 [2024-11-15 11:00:14.539012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:16:46.892 [2024-11-15 11:00:14.539032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:69072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:46.892 [2024-11-15 11:00:14.539046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:16:46.892 [2024-11-15 11:00:14.539067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:69080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:46.892 [2024-11-15 11:00:14.539091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:46.892 [2024-11-15 11:00:14.539114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:69088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:46.892 [2024-11-15 11:00:14.539129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:46.892 [2024-11-15 11:00:14.539150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:69096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:46.892 [2024-11-15 11:00:14.539164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:16:46.892 [2024-11-15 11:00:14.539184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:69104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:46.892 [2024-11-15 11:00:14.539199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:16:46.892 [2024-11-15 11:00:14.539220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 
lba:69112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:46.892 [2024-11-15 11:00:14.539234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:16:46.892 [2024-11-15 11:00:14.539254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:69120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:46.892 [2024-11-15 11:00:14.539269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:16:46.892 [2024-11-15 11:00:14.539289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:69128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:46.892 [2024-11-15 11:00:14.539304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:16:46.892 [2024-11-15 11:00:14.539335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:68496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.892 [2024-11-15 11:00:14.539350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:16:46.892 [2024-11-15 11:00:14.539370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:68504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.892 [2024-11-15 11:00:14.539385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:16:46.892 [2024-11-15 11:00:14.539405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:68512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.892 [2024-11-15 11:00:14.539419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:46.892 [2024-11-15 11:00:14.539439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:68520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.892 [2024-11-15 11:00:14.539454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:16:46.892 [2024-11-15 11:00:14.539474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:68528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.892 [2024-11-15 11:00:14.539489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:16:46.892 [2024-11-15 11:00:14.539510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:68536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.892 [2024-11-15 11:00:14.539531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:16:46.892 [2024-11-15 11:00:14.539572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:68544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.892 [2024-11-15 11:00:14.539588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:16:46.892 [2024-11-15 11:00:14.539609] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:68552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.892 [2024-11-15 11:00:14.539624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:16:46.892 [2024-11-15 11:00:14.539648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:69136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:46.892 [2024-11-15 11:00:14.539665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:16:46.892 [2024-11-15 11:00:14.539685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:69144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:46.892 [2024-11-15 11:00:14.539700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:16:46.892 [2024-11-15 11:00:14.539721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:69152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:46.892 [2024-11-15 11:00:14.539735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:16:46.892 [2024-11-15 11:00:14.539756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:69160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:46.892 [2024-11-15 11:00:14.539770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:16:46.892 [2024-11-15 11:00:14.539839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:69168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:46.892 [2024-11-15 11:00:14.539857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:16:46.892 [2024-11-15 11:00:14.539879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:69176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:46.892 [2024-11-15 11:00:14.539895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:16:46.892 [2024-11-15 11:00:14.539917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:69184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:46.892 [2024-11-15 11:00:14.539933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:16:46.892 [2024-11-15 11:00:14.539955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:69192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:46.892 [2024-11-15 11:00:14.539971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:16:46.892 [2024-11-15 11:00:14.539992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:69200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:46.892 [2024-11-15 11:00:14.540008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 
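The completions in this dump that report ASYMMETRIC ACCESS INACCESSIBLE (03/02) are printed as (SCT/SC): Path Related Status (0x3) / Asymmetric Access Inaccessible (0x02). They are the expected responses for I/O that landed on a listener whose ANA state had just been flipped to inaccessible; the multipath bdev treats them as path errors and retries rather than surfacing failures to bdevperf, which is consistent with io_failed: 0 in the results block above. As an illustrative check (not part of the test script), the occurrences in this same capture can be counted with:

    grep -c 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)' /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt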
00:16:46.892 [2024-11-15 11:00:14.540030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:69208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:16:46.892 [2024-11-15 11:00:14.540045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:16:46.893 [2024-11-15 11:00:14.540086 - 11:00:14.543609] nvme_qpair.c: [repeated nvme_io_qpair_print_command / spdk_nvme_print_completion notice pairs omitted: WRITE (lba 69216-69448) and READ (lba 68560-68872) commands on qid:1, each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02)]
00:16:46.895 9308.40 IOPS, 36.36 MiB/s [2024-11-15T11:00:33.756Z] 8726.62 IOPS, 34.09 MiB/s [2024-11-15T11:00:33.756Z] 8213.29 IOPS, 32.08 MiB/s [2024-11-15T11:00:33.756Z] 7757.00 IOPS, 30.30 MiB/s [2024-11-15T11:00:33.756Z] 7372.68 IOPS, 28.80 MiB/s [2024-11-15T11:00:33.756Z] 7532.55 IOPS, 29.42 MiB/s [2024-11-15T11:00:33.756Z] 7655.38 IOPS, 29.90 MiB/s [2024-11-15T11:00:33.756Z] 7838.77 IOPS, 30.62 MiB/s [2024-11-15T11:00:33.756Z] 8085.61 IOPS, 31.58 MiB/s [2024-11-15T11:00:33.756Z] 8316.58 IOPS, 32.49 MiB/s [2024-11-15T11:00:33.756Z] 8476.16 IOPS, 33.11 MiB/s [2024-11-15T11:00:33.756Z] 8543.08 IOPS, 33.37 MiB/s [2024-11-15T11:00:33.756Z] 8593.78 IOPS, 33.57 MiB/s [2024-11-15T11:00:33.756Z] 8637.14 IOPS, 33.74 MiB/s [2024-11-15T11:00:33.756Z] 8796.86 IOPS, 34.36 MiB/s [2024-11-15T11:00:33.756Z] 8954.87 IOPS, 34.98 MiB/s [2024-11-15T11:00:33.756Z] 9106.74 IOPS, 35.57 MiB/s [2024-11-15T11:00:33.756Z]
00:16:46.895 [2024-11-15 11:00:30.784151 - 11:00:30.787515] nvme_qpair.c: [repeated nvme_io_qpair_print_command / spdk_nvme_print_completion notice pairs omitted: READ (lba 50896-51440) and WRITE (lba 51464-51896) commands on qid:1, each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02)]
00:16:46.896 9165.62 IOPS, 35.80 MiB/s [2024-11-15T11:00:33.757Z] 9204.24 IOPS, 35.95 MiB/s [2024-11-15T11:00:33.758Z] Received shutdown signal, test time was about 33.851236 seconds
00:16:46.897
00:16:46.897 Latency(us)
00:16:46.897 [2024-11-15T11:00:33.758Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:46.897 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:16:46.897 Verification LBA range: start 0x0 length 0x4000
00:16:46.897 Nvme0n1 : 33.85 9233.47 36.07 0.00 0.00 13833.30 355.61 4026531.84
00:16:46.897 [2024-11-15T11:00:33.758Z] ===================================================================================================================
00:16:46.897 [2024-11-15T11:00:33.758Z] Total : 9233.47 36.07 0.00 0.00 13833.30 355.61 4026531.84
00:16:46.897 11:00:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:16:47.155 11:00:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:16:47.155 11:00:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:16:47.155 11:00:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:16:47.155 11:00:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup
00:16:47.155 11:00:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync
00:16:47.414 11:00:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:16:47.414 11:00:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e
00:16:47.414 11:00:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20}
00:16:47.414 11:00:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:16:47.414 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
11:00:34
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:47.414 11:00:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:16:47.414 11:00:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:16:47.414 11:00:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 76148 ']' 00:16:47.414 11:00:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 76148 00:16:47.414 11:00:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 76148 ']' 00:16:47.414 11:00:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 76148 00:16:47.414 11:00:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:16:47.415 11:00:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:47.415 11:00:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76148 00:16:47.415 killing process with pid 76148 00:16:47.415 11:00:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:47.415 11:00:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:47.415 11:00:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76148' 00:16:47.415 11:00:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 76148 00:16:47.415 11:00:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 76148 00:16:47.985 11:00:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:47.985 11:00:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:47.985 11:00:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:47.985 11:00:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:16:47.985 11:00:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:16:47.985 11:00:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:47.985 11:00:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:16:47.985 11:00:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:47.985 11:00:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:47.985 11:00:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:47.985 11:00:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:47.985 11:00:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:47.985 11:00:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:47.985 11:00:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:47.985 11:00:34 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:47.985 11:00:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:47.985 11:00:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:47.985 11:00:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:47.985 11:00:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:47.985 11:00:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:47.985 11:00:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:47.985 11:00:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:47.985 11:00:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:47.985 11:00:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:47.985 11:00:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:47.985 11:00:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:47.985 11:00:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@300 -- # return 0 00:16:47.985 ************************************ 00:16:47.985 END TEST nvmf_host_multipath_status 00:16:47.985 ************************************ 00:16:47.985 00:16:47.985 real 0m40.064s 00:16:47.985 user 2m8.483s 00:16:47.985 sys 0m12.428s 00:16:47.985 11:00:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:47.985 11:00:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:47.985 11:00:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:16:47.985 11:00:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:47.985 11:00:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:47.985 11:00:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:47.985 ************************************ 00:16:47.985 START TEST nvmf_discovery_remove_ifc 00:16:47.985 ************************************ 00:16:47.985 11:00:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:16:48.247 * Looking for test storage... 
00:16:48.247 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:48.247 11:00:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:48.247 11:00:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:16:48.247 11:00:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:48.247 11:00:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:48.247 11:00:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:48.247 11:00:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:48.247 11:00:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:48.247 11:00:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:16:48.247 11:00:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:16:48.247 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:16:48.247 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:16:48.247 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:16:48.247 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:16:48.247 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:16:48.247 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:48.247 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:16:48.247 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:16:48.247 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:48.247 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:48.247 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:16:48.247 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:16:48.247 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:48.247 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:16:48.247 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:16:48.247 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:16:48.247 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:16:48.247 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:48.247 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:16:48.247 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:16:48.247 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:48.247 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:48.247 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:16:48.247 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:48.247 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:48.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:48.247 --rc genhtml_branch_coverage=1 00:16:48.247 --rc genhtml_function_coverage=1 00:16:48.247 --rc genhtml_legend=1 00:16:48.247 --rc geninfo_all_blocks=1 00:16:48.247 --rc geninfo_unexecuted_blocks=1 00:16:48.247 00:16:48.247 ' 00:16:48.247 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:48.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:48.247 --rc genhtml_branch_coverage=1 00:16:48.247 --rc genhtml_function_coverage=1 00:16:48.247 --rc genhtml_legend=1 00:16:48.247 --rc geninfo_all_blocks=1 00:16:48.247 --rc geninfo_unexecuted_blocks=1 00:16:48.247 00:16:48.247 ' 00:16:48.247 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:48.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:48.247 --rc genhtml_branch_coverage=1 00:16:48.247 --rc genhtml_function_coverage=1 00:16:48.247 --rc genhtml_legend=1 00:16:48.247 --rc geninfo_all_blocks=1 00:16:48.247 --rc geninfo_unexecuted_blocks=1 00:16:48.247 00:16:48.247 ' 00:16:48.247 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:48.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:48.247 --rc genhtml_branch_coverage=1 00:16:48.247 --rc genhtml_function_coverage=1 00:16:48.248 --rc genhtml_legend=1 00:16:48.248 --rc geninfo_all_blocks=1 00:16:48.248 --rc geninfo_unexecuted_blocks=1 00:16:48.248 00:16:48.248 ' 00:16:48.248 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:48.248 11:00:35 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:16:48.248 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:48.248 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:48.248 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:48.248 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:48.248 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:48.248 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:48.248 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:48.248 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:48.248 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:48.248 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:48.248 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:16:48.248 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:16:48.248 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:48.248 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:48.248 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:48.248 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:48.248 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:48.248 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:16:48.248 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:48.248 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:48.248 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:48.248 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:48.248 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:48.248 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:48.248 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:16:48.248 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:48.248 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:16:48.248 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:48.248 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:48.248 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:48.248 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:48.248 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:48.248 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:48.248 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:48.248 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:48.248 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:48.248 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:48.248 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:16:48.248 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 
-- # discovery_port=8009 00:16:48.248 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:16:48.248 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:16:48.248 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:16:48.248 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:16:48.248 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:16:48.248 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:48.248 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:48.248 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:48.248 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:48.248 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:48.248 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:48.248 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:48.248 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:48.248 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:48.248 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:48.248 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:48.248 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:48.248 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:48.248 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:48.248 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:48.248 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:48.248 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:48.248 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:48.248 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:48.248 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:48.248 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:48.249 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:48.249 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:48.249 11:00:35 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:48.249 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:48.249 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:48.249 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:48.249 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:48.249 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:48.249 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:48.249 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:48.249 Cannot find device "nvmf_init_br" 00:16:48.249 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:16:48.249 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:48.249 Cannot find device "nvmf_init_br2" 00:16:48.249 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:16:48.249 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:48.249 Cannot find device "nvmf_tgt_br" 00:16:48.249 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # true 00:16:48.249 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:48.249 Cannot find device "nvmf_tgt_br2" 00:16:48.249 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # true 00:16:48.249 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:48.249 Cannot find device "nvmf_init_br" 00:16:48.508 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # true 00:16:48.508 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:48.508 Cannot find device "nvmf_init_br2" 00:16:48.508 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # true 00:16:48.508 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:48.508 Cannot find device "nvmf_tgt_br" 00:16:48.508 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # true 00:16:48.508 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:48.508 Cannot find device "nvmf_tgt_br2" 00:16:48.508 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # true 00:16:48.508 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:48.508 Cannot find device "nvmf_br" 00:16:48.508 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # true 00:16:48.508 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:48.508 Cannot find device "nvmf_init_if" 00:16:48.508 11:00:35 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # true 00:16:48.508 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:48.508 Cannot find device "nvmf_init_if2" 00:16:48.508 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # true 00:16:48.508 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:48.508 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:48.508 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # true 00:16:48.508 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:48.508 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:48.509 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # true 00:16:48.509 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:48.509 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:48.509 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:48.509 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:48.509 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:48.509 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:48.509 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:48.509 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:48.509 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:48.509 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:48.509 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:48.509 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:48.509 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:48.509 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:48.509 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:48.509 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:48.509 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:48.509 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:48.509 11:00:35 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:48.509 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:48.509 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:48.509 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:48.509 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:48.509 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:48.509 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:48.509 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:48.768 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:48.768 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:48.768 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:48.768 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:48.768 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:48.768 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:48.768 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:48.768 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:48.768 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.087 ms 00:16:48.768 00:16:48.768 --- 10.0.0.3 ping statistics --- 00:16:48.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:48.768 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:16:48.768 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:48.768 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:48.768 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.054 ms 00:16:48.768 00:16:48.768 --- 10.0.0.4 ping statistics --- 00:16:48.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:48.768 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:16:48.768 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:48.768 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:48.768 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:16:48.768 00:16:48.768 --- 10.0.0.1 ping statistics --- 00:16:48.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:48.768 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:16:48.768 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:48.768 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:48.768 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.098 ms 00:16:48.768 00:16:48.768 --- 10.0.0.2 ping statistics --- 00:16:48.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:48.768 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:16:48.768 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:48.768 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@461 -- # return 0 00:16:48.768 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:48.768 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:48.768 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:48.768 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:48.768 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:48.768 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:48.768 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:48.768 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:16:48.768 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:48.768 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:48.768 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:48.768 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=77073 00:16:48.768 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:48.768 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 77073 00:16:48.768 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 77073 ']' 00:16:48.768 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:48.769 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:48.769 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:48.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
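[editor's note] The trace above is the nvmf_veth_init step from test/nvmf/common.sh: it builds the virtual test network that the discovery test runs over (a network namespace for the target, veth pairs bridged back to the default namespace, and iptables accept rules for the NVMe/TCP port). A minimal standalone sketch of the same steps for one initiator/target pair, assuming iproute2 and iptables are available and the interface names are unused; names and addresses are copied from the trace, which additionally creates a second pair (nvmf_init_if2 / nvmf_tgt_if2):

    # namespace for the SPDK target
    ip netns add nvmf_tgt_ns_spdk
    # veth pairs; the *_br peers stay in the default namespace and get bridged
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    # initiator on 10.0.0.1, target on 10.0.0.3
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # bridge the peer ends so the two namespaces can reach each other
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # allow NVMe/TCP traffic to port 4420 and forwarding across the bridge
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    # sanity check, as the trace does with ping
    ping -c 1 10.0.0.3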
00:16:48.769 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:48.769 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:48.769 [2024-11-15 11:00:35.524519] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:16:48.769 [2024-11-15 11:00:35.524631] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:49.028 [2024-11-15 11:00:35.679860] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:49.028 [2024-11-15 11:00:35.740995] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:49.028 [2024-11-15 11:00:35.741257] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:49.028 [2024-11-15 11:00:35.741341] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:49.028 [2024-11-15 11:00:35.741436] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:49.028 [2024-11-15 11:00:35.741549] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:49.028 [2024-11-15 11:00:35.742113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:49.028 [2024-11-15 11:00:35.800876] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:49.028 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:49.028 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:16:49.028 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:49.028 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:49.028 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:49.286 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:49.286 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:16:49.286 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.286 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:49.286 [2024-11-15 11:00:35.922753] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:49.286 [2024-11-15 11:00:35.930907] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:16:49.286 null0 00:16:49.286 [2024-11-15 11:00:35.962814] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:49.286 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.286 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=77097 00:16:49.286 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 
0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:16:49.286 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 77097 /tmp/host.sock 00:16:49.286 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 77097 ']' 00:16:49.286 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:16:49.286 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:49.286 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:16:49.286 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:16:49.286 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:49.286 11:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:49.286 [2024-11-15 11:00:36.048017] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:16:49.286 [2024-11-15 11:00:36.048105] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77097 ] 00:16:49.545 [2024-11-15 11:00:36.200744] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:49.545 [2024-11-15 11:00:36.257337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:49.545 11:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:49.545 11:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:16:49.545 11:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:49.545 11:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:16:49.545 11:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.545 11:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:49.545 11:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.545 11:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:16:49.545 11:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.545 11:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:49.545 [2024-11-15 11:00:36.364182] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:49.804 11:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.804 11:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 
--ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:16:49.804 11:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.804 11:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:50.738 [2024-11-15 11:00:37.426126] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:16:50.738 [2024-11-15 11:00:37.426177] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:16:50.738 [2024-11-15 11:00:37.426201] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:16:50.738 [2024-11-15 11:00:37.432201] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:16:50.738 [2024-11-15 11:00:37.487014] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:16:50.738 [2024-11-15 11:00:37.488175] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1a9efb0:1 started. 00:16:50.738 [2024-11-15 11:00:37.490200] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:16:50.738 [2024-11-15 11:00:37.490262] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:16:50.738 [2024-11-15 11:00:37.490287] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:16:50.738 [2024-11-15 11:00:37.490304] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:16:50.738 [2024-11-15 11:00:37.490342] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:16:50.738 11:00:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.738 11:00:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:16:50.738 11:00:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:50.738 [2024-11-15 11:00:37.494713] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1a9efb0 was disconnected and freed. delete nvme_qpair. 
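[editor's note] At this point the target app (pid 77073) is listening on 10.0.0.3 with discovery on port 8009 and the NVM subsystem on 4420, and a second SPDK app (pid 77097) has been started as the host and told to attach everything the discovery service reports. A condensed sketch of that host-side sequence, assuming it is run from the SPDK repo root and that scripts/rpc.py is used to drive the /tmp/host.sock RPC socket (the test's rpc_cmd helper issues the same RPCs):

    # host-side SPDK app on its own RPC socket, with bdev_nvme debug logging
    ./build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &
    # (the test waits for /tmp/host.sock to accept RPCs before continuing)

    # options as issued in the trace, then finish framework init
    ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_set_options -e 1
    ./scripts/rpc.py -s /tmp/host.sock framework_start_init

    # connect to the discovery service and attach reported subsystems; the short
    # timeouts make the later interface removal take effect within a few seconds
    ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach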
00:16:50.738 11:00:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:50.739 11:00:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:50.739 11:00:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.739 11:00:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:50.739 11:00:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:50.739 11:00:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:50.739 11:00:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.739 11:00:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:16:50.739 11:00:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if 00:16:50.739 11:00:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:16:50.739 11:00:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:16:50.739 11:00:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:50.739 11:00:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:50.739 11:00:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.739 11:00:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:50.739 11:00:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:50.739 11:00:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:50.739 11:00:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:50.739 11:00:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.996 11:00:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:50.996 11:00:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:51.931 11:00:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:51.931 11:00:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:51.931 11:00:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.931 11:00:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:51.931 11:00:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:51.931 11:00:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:51.932 11:00:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:51.932 11:00:38 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.932 11:00:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:51.932 11:00:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:52.867 11:00:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:52.867 11:00:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:52.867 11:00:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:52.867 11:00:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.867 11:00:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:52.867 11:00:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:52.867 11:00:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:52.867 11:00:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.126 11:00:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:53.126 11:00:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:54.061 11:00:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:54.061 11:00:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:54.061 11:00:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.061 11:00:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:54.062 11:00:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:54.062 11:00:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:54.062 11:00:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:54.062 11:00:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.062 11:00:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:54.062 11:00:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:54.999 11:00:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:54.999 11:00:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:54.999 11:00:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:54.999 11:00:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:54.999 11:00:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.999 11:00:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:54.999 11:00:41 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:54.999 11:00:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.258 11:00:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:55.258 11:00:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:56.192 11:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:56.192 11:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:56.192 11:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:56.192 11:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:56.192 11:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.192 11:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:56.192 11:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:56.192 11:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.192 [2024-11-15 11:00:42.918243] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:16:56.192 [2024-11-15 11:00:42.918347] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:56.192 [2024-11-15 11:00:42.918363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.192 [2024-11-15 11:00:42.918391] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:56.192 [2024-11-15 11:00:42.918401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.192 [2024-11-15 11:00:42.918411] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:56.192 [2024-11-15 11:00:42.918421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.192 [2024-11-15 11:00:42.918431] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:56.192 [2024-11-15 11:00:42.918440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.192 [2024-11-15 11:00:42.918451] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:16:56.192 [2024-11-15 11:00:42.918476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.192 [2024-11-15 11:00:42.918485] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7b240 is same with the state(6) to be set 00:16:56.192 11:00:42 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:56.192 11:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:56.192 [2024-11-15 11:00:42.928238] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a7b240 (9): Bad file descriptor 00:16:56.192 [2024-11-15 11:00:42.938255] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:16:56.192 [2024-11-15 11:00:42.938295] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:16:56.192 [2024-11-15 11:00:42.938305] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:16:56.192 [2024-11-15 11:00:42.938311] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:16:56.192 [2024-11-15 11:00:42.938364] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:16:57.125 11:00:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:57.125 11:00:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:57.125 11:00:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:57.125 11:00:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.125 11:00:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:57.125 11:00:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:57.125 11:00:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:57.125 [2024-11-15 11:00:43.962635] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:16:57.125 [2024-11-15 11:00:43.962701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a7b240 with addr=10.0.0.3, port=4420 00:16:57.125 [2024-11-15 11:00:43.962734] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7b240 is same with the state(6) to be set 00:16:57.125 [2024-11-15 11:00:43.962791] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a7b240 (9): Bad file descriptor 00:16:57.125 [2024-11-15 11:00:43.963275] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:16:57.125 [2024-11-15 11:00:43.963324] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:16:57.125 [2024-11-15 11:00:43.963337] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:16:57.125 [2024-11-15 11:00:43.963349] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:16:57.125 [2024-11-15 11:00:43.963359] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:16:57.125 [2024-11-15 11:00:43.963366] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:16:57.125 [2024-11-15 11:00:43.963371] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:16:57.125 [2024-11-15 11:00:43.963382] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:16:57.125 [2024-11-15 11:00:43.963389] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:16:57.125 11:00:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.384 11:00:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:57.384 11:00:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:58.320 [2024-11-15 11:00:44.963429] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:16:58.320 [2024-11-15 11:00:44.963489] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:16:58.320 [2024-11-15 11:00:44.963516] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:16:58.320 [2024-11-15 11:00:44.963539] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:16:58.320 [2024-11-15 11:00:44.963552] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:16:58.320 [2024-11-15 11:00:44.963563] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:16:58.320 [2024-11-15 11:00:44.963570] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:16:58.320 [2024-11-15 11:00:44.963577] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
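[editor's note] The reconnect failures above are the intended effect of the earlier step that pulled the target interface out from under the attached controller. A sketch of that removal plus the polling loop the test uses to wait for the nvme0n1 bdev to disappear (the jq/sort/xargs pipeline is the trace's get_bdev_list helper):

    # take the target address away and bring the interface down in the target netns
    ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down

    # poll the host until no bdevs remain; with --ctrlr-loss-timeout-sec 2 the
    # controller and its nvme0n1 namespace bdev are deleted once reconnects fail
    while [ -n "$(./scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs)" ]; do
        sleep 1
    done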
00:16:58.320 [2024-11-15 11:00:44.963611] bdev_nvme.c:7135:remove_discovery_entry: *INFO*: Discovery[10.0.0.3:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 00:16:58.320 [2024-11-15 11:00:44.963656] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:58.320 [2024-11-15 11:00:44.963672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.321 [2024-11-15 11:00:44.963686] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:58.321 [2024-11-15 11:00:44.963696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.321 [2024-11-15 11:00:44.963706] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:58.321 [2024-11-15 11:00:44.963716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.321 [2024-11-15 11:00:44.963726] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:58.321 [2024-11-15 11:00:44.963735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.321 [2024-11-15 11:00:44.963745] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:16:58.321 [2024-11-15 11:00:44.963755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.321 [2024-11-15 11:00:44.963765] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
00:16:58.321 [2024-11-15 11:00:44.963826] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a06a20 (9): Bad file descriptor 00:16:58.321 [2024-11-15 11:00:44.964796] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:16:58.321 [2024-11-15 11:00:44.964823] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:16:58.321 11:00:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:58.321 11:00:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:58.321 11:00:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:58.321 11:00:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.321 11:00:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:58.321 11:00:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:58.321 11:00:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:58.321 11:00:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.321 11:00:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:16:58.321 11:00:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:58.321 11:00:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:58.321 11:00:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:16:58.321 11:00:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:58.321 11:00:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:58.321 11:00:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:58.321 11:00:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.321 11:00:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:58.321 11:00:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:58.321 11:00:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:58.321 11:00:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.321 11:00:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:16:58.321 11:00:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:59.696 11:00:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:59.696 11:00:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:59.696 11:00:46 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:59.696 11:00:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.696 11:00:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:59.696 11:00:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:59.696 11:00:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:59.696 11:00:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.696 11:00:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:16:59.696 11:00:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:00.262 [2024-11-15 11:00:46.970853] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:17:00.262 [2024-11-15 11:00:46.970879] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:17:00.262 [2024-11-15 11:00:46.970914] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:17:00.262 [2024-11-15 11:00:46.976893] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme1 00:17:00.262 [2024-11-15 11:00:47.031192] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4420 00:17:00.262 [2024-11-15 11:00:47.032072] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x1aa7290:1 started. 00:17:00.262 [2024-11-15 11:00:47.033446] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:17:00.262 [2024-11-15 11:00:47.033506] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:17:00.262 [2024-11-15 11:00:47.033529] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:17:00.262 [2024-11-15 11:00:47.033555] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme1 done 00:17:00.262 [2024-11-15 11:00:47.033565] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:17:00.262 [2024-11-15 11:00:47.039476] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x1aa7290 was disconnected and freed. delete nvme_qpair. 
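[editor's note] Once the address is restored and the interface brought back up, the discovery poller reconnects and attaches a fresh controller, so a new namespace bdev (nvme1n1) appears. A sketch of that recovery half, under the same assumptions as the earlier sketches:

    # restore the target address and bring the interface back up
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up

    # wait until the re-attached controller exposes its namespace as nvme1n1
    until [ "$(./scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs)" = "nvme1n1" ]; do
        sleep 1
    done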
00:17:00.522 11:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:00.522 11:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:00.522 11:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:00.522 11:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.522 11:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:00.522 11:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:00.522 11:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:00.522 11:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.522 11:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:17:00.522 11:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:17:00.522 11:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 77097 00:17:00.522 11:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 77097 ']' 00:17:00.522 11:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 77097 00:17:00.522 11:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:17:00.522 11:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:00.522 11:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77097 00:17:00.522 killing process with pid 77097 00:17:00.522 11:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:00.522 11:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:00.522 11:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77097' 00:17:00.522 11:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 77097 00:17:00.522 11:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 77097 00:17:00.780 11:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:17:00.780 11:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:00.780 11:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:17:00.780 11:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:00.780 11:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:17:00.780 11:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:00.780 11:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:00.780 rmmod nvme_tcp 00:17:00.780 rmmod nvme_fabrics 00:17:00.780 rmmod nvme_keyring 00:17:00.780 11:00:47 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:00.780 11:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:17:00.780 11:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:17:00.780 11:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 77073 ']' 00:17:00.780 11:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 77073 00:17:00.780 11:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 77073 ']' 00:17:00.780 11:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 77073 00:17:00.780 11:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:17:00.780 11:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:00.780 11:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77073 00:17:00.780 11:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:00.780 11:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:00.780 killing process with pid 77073 00:17:00.780 11:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77073' 00:17:00.780 11:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 77073 00:17:00.780 11:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 77073 00:17:01.040 11:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:01.040 11:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:01.040 11:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:01.040 11:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:17:01.040 11:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:01.040 11:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:17:01.040 11:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:17:01.040 11:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:01.040 11:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:01.040 11:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:01.040 11:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:01.040 11:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:01.040 11:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:01.040 11:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:01.040 11:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:01.040 11:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:01.040 11:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:01.040 11:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:01.300 11:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:01.300 11:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:01.300 11:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:01.300 11:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:01.300 11:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:01.300 11:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:01.300 11:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:01.300 11:00:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:01.300 11:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@300 -- # return 0 00:17:01.300 00:17:01.300 real 0m13.205s 00:17:01.300 user 0m22.357s 00:17:01.300 sys 0m2.485s 00:17:01.300 11:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:01.300 11:00:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:01.300 ************************************ 00:17:01.300 END TEST nvmf_discovery_remove_ifc 00:17:01.300 ************************************ 00:17:01.300 11:00:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:17:01.300 11:00:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:01.300 11:00:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:01.300 11:00:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.300 ************************************ 00:17:01.300 START TEST nvmf_identify_kernel_target 00:17:01.300 ************************************ 00:17:01.300 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:17:01.300 * Looking for test storage... 
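[Editor's note] Before the next test starts, a compact replay of the network cleanup just traced (nvmftestfini / nvmf_veth_fini). This is a sketch, not the suite's helper; interface, bridge and namespace names are the suite's defaults, and errors are ignored because the devices may already be gone.

# Drop the SPDK-tagged firewall rules, then tear down the veth/bridge topology.
iptables-save | grep -v SPDK_NVMF | iptables-restore
for ifc in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$ifc" nomaster 2>/dev/null || true
    ip link set "$ifc" down 2>/dev/null || true
done
ip link delete nvmf_br type bridge 2>/dev/null || true
ip link delete nvmf_init_if 2>/dev/null || true
ip link delete nvmf_init_if2 2>/dev/null || true
# The trace deletes the target-side interfaces inside the namespace first;
# deleting the namespace removes whatever is left in it.
ip netns delete nvmf_tgt_ns_spdk 2>/dev/null || true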
00:17:01.300 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:01.300 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:01.560 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:17:01.560 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:01.560 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:01.560 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:01.560 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:01.560 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:01.560 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:17:01.560 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:17:01.560 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:17:01.560 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:17:01.560 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:17:01.560 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:17:01.560 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:17:01.560 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:01.560 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:17:01.560 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:17:01.560 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:01.560 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:01.560 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:17:01.560 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:17:01.560 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:01.560 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:17:01.560 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:17:01.560 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:17:01.560 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:17:01.560 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:01.560 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:17:01.560 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:17:01.560 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:01.560 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:01.560 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:17:01.560 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:01.560 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:01.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:01.560 --rc genhtml_branch_coverage=1 00:17:01.560 --rc genhtml_function_coverage=1 00:17:01.560 --rc genhtml_legend=1 00:17:01.560 --rc geninfo_all_blocks=1 00:17:01.560 --rc geninfo_unexecuted_blocks=1 00:17:01.560 00:17:01.560 ' 00:17:01.560 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:01.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:01.560 --rc genhtml_branch_coverage=1 00:17:01.560 --rc genhtml_function_coverage=1 00:17:01.560 --rc genhtml_legend=1 00:17:01.560 --rc geninfo_all_blocks=1 00:17:01.560 --rc geninfo_unexecuted_blocks=1 00:17:01.560 00:17:01.560 ' 00:17:01.560 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:01.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:01.560 --rc genhtml_branch_coverage=1 00:17:01.560 --rc genhtml_function_coverage=1 00:17:01.560 --rc genhtml_legend=1 00:17:01.560 --rc geninfo_all_blocks=1 00:17:01.560 --rc geninfo_unexecuted_blocks=1 00:17:01.560 00:17:01.560 ' 00:17:01.560 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:01.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:01.560 --rc genhtml_branch_coverage=1 00:17:01.560 --rc genhtml_function_coverage=1 00:17:01.560 --rc genhtml_legend=1 00:17:01.560 --rc geninfo_all_blocks=1 00:17:01.560 --rc geninfo_unexecuted_blocks=1 00:17:01.560 00:17:01.560 ' 00:17:01.560 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
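[Editor's note] A minimal sketch of the version check traced above (lt 1.15 2 from scripts/common.sh): split both version strings on '.', '-' and ':' and compare them numerically field by field. This is not the repo's exact cmp_versions implementation and assumes purely numeric fields.

version_lt() {
    local -a a b
    local i x y
    IFS=.-: read -ra a <<< "$1"
    IFS=.-: read -ra b <<< "$2"
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        x=${a[i]:-0} y=${b[i]:-0}        # missing fields count as 0
        ((x < y)) && return 0
        ((x > y)) && return 1
    done
    return 1                             # equal is not "less than"
}

# As in the trace: take the last field of `lcov --version` and test it against 2.
version_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "lcov predates 2.x"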
00:17:01.560 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:17:01.560 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:01.560 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:01.560 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:01.560 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:01.560 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:01.560 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:01.560 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:01.560 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:01.560 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:01.560 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:01.560 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:17:01.560 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:17:01.560 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:01.560 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:01.560 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:01.560 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:01.560 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:01.560 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:17:01.560 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:01.560 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:01.560 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:01.560 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.560 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.560 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.560 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:17:01.560 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.560 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:17:01.560 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:01.560 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:01.560 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:01.560 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:01.560 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:01.560 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:01.560 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:01.560 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:01.560 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:01.560 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:01.560 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:17:01.560 11:00:48 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:01.560 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:01.560 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:01.560 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:01.560 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:01.560 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:01.560 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:01.560 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:01.560 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:01.560 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:01.560 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:01.560 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:01.560 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:01.560 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:01.560 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:01.560 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:01.561 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:01.561 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:01.561 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:01.561 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:01.561 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:01.561 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:01.561 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:01.561 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:01.561 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:01.561 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:01.561 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:01.561 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:01.561 11:00:48 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:01.561 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:01.561 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:01.561 Cannot find device "nvmf_init_br" 00:17:01.561 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:17:01.561 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:01.561 Cannot find device "nvmf_init_br2" 00:17:01.561 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:17:01.561 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:01.561 Cannot find device "nvmf_tgt_br" 00:17:01.561 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # true 00:17:01.561 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:01.561 Cannot find device "nvmf_tgt_br2" 00:17:01.561 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # true 00:17:01.561 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:01.561 Cannot find device "nvmf_init_br" 00:17:01.561 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # true 00:17:01.561 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:01.561 Cannot find device "nvmf_init_br2" 00:17:01.561 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # true 00:17:01.561 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:01.561 Cannot find device "nvmf_tgt_br" 00:17:01.561 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # true 00:17:01.561 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:01.561 Cannot find device "nvmf_tgt_br2" 00:17:01.561 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # true 00:17:01.561 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:01.561 Cannot find device "nvmf_br" 00:17:01.561 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # true 00:17:01.561 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:01.561 Cannot find device "nvmf_init_if" 00:17:01.561 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # true 00:17:01.561 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:01.561 Cannot find device "nvmf_init_if2" 00:17:01.561 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # true 00:17:01.561 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:01.819 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:01.819 11:00:48 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # true 00:17:01.819 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:01.819 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:01.819 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # true 00:17:01.819 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:01.819 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:01.819 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:01.819 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:01.819 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:01.819 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:01.819 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:01.819 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:01.819 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:01.819 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:01.819 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:01.819 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:01.819 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:01.819 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:01.819 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:01.819 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:01.819 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:01.819 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:01.819 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:01.819 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:01.819 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:01.819 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:01.819 11:00:48 
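[Editor's note] A compact recap of the namespace/veth/bridge topology the trace just built; the bridge-master assignments and the SPDK-tagged iptables ACCEPT rules follow right below. Names and addresses are the suite's defaults (10.0.0.1 on the initiator side, 10.0.0.3 inside nvmf_tgt_ns_spdk); the second interface pair (nvmf_init_if2 / nvmf_tgt_if2 with 10.0.0.2 / 10.0.0.4) is created the same way and is omitted here.

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target-side pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
# nvmf_init_br and nvmf_tgt_br are then attached with `ip link set ... master nvmf_br`,
# which is the step traced immediately below.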
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:01.819 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:01.819 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:01.819 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:01.819 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:01.819 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:01.819 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:01.819 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:01.819 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:01.819 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:01.819 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:01.819 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:01.819 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:17:01.819 00:17:01.819 --- 10.0.0.3 ping statistics --- 00:17:01.819 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:01.819 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:17:01.819 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:01.819 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:01.819 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.045 ms 00:17:01.819 00:17:01.819 --- 10.0.0.4 ping statistics --- 00:17:01.819 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:01.819 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:17:01.819 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:01.819 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:01.819 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:17:01.819 00:17:01.819 --- 10.0.0.1 ping statistics --- 00:17:01.819 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:01.819 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:17:01.819 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:01.819 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:01.819 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.045 ms 00:17:01.819 00:17:01.819 --- 10.0.0.2 ping statistics --- 00:17:01.819 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:01.819 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:17:01.819 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:01.819 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@461 -- # return 0 00:17:01.819 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:01.819 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:01.819 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:01.820 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:01.820 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:01.820 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:01.820 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:01.820 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:17:01.820 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:17:01.820 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:17:01.820 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:01.820 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:01.820 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:01.820 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:01.820 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:01.820 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:01.820 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:01.820 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:01.820 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:01.820 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:17:01.820 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:17:01.820 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:17:01.820 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:17:01.820 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:17:01.820 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:17:01.820 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:17:01.820 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:17:01.820 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:17:01.820 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:17:02.077 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:17:02.077 11:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:17:02.336 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:02.336 Waiting for block devices as requested 00:17:02.336 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:17:02.336 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:17:02.594 11:00:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:17:02.594 11:00:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:17:02.594 11:00:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:17:02.594 11:00:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:17:02.594 11:00:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:17:02.594 11:00:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:17:02.595 11:00:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:17:02.595 11:00:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:17:02.595 11:00:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:17:02.595 No valid GPT data, bailing 00:17:02.595 11:00:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:17:02.595 11:00:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:17:02.595 11:00:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:17:02.595 11:00:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:17:02.595 11:00:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:17:02.595 11:00:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:17:02.595 11:00:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:17:02.595 11:00:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:17:02.595 11:00:49 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:17:02.595 11:00:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:17:02.595 11:00:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:17:02.595 11:00:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:17:02.595 11:00:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:17:02.595 No valid GPT data, bailing 00:17:02.595 11:00:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:17:02.595 11:00:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:17:02.595 11:00:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:17:02.595 11:00:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:17:02.595 11:00:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:17:02.595 11:00:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:17:02.595 11:00:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:17:02.595 11:00:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:17:02.595 11:00:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:17:02.595 11:00:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:17:02.595 11:00:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:17:02.595 11:00:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:17:02.595 11:00:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:17:02.595 No valid GPT data, bailing 00:17:02.595 11:00:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:17:02.853 11:00:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:17:02.853 11:00:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:17:02.853 11:00:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:17:02.853 11:00:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:17:02.853 11:00:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:17:02.853 11:00:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:17:02.853 11:00:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:17:02.853 11:00:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:17:02.853 11:00:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1653 -- # [[ none != none ]] 00:17:02.853 11:00:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:17:02.853 11:00:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:17:02.853 11:00:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:17:02.853 No valid GPT data, bailing 00:17:02.853 11:00:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:17:02.853 11:00:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:17:02.853 11:00:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:17:02.853 11:00:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:17:02.853 11:00:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:17:02.853 11:00:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:17:02.853 11:00:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:17:02.853 11:00:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:17:02.853 11:00:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:17:02.853 11:00:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:17:02.853 11:00:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:17:02.853 11:00:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:17:02.853 11:00:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:17:02.853 11:00:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:17:02.853 11:00:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:17:02.853 11:00:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:17:02.853 11:00:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:17:02.853 11:00:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --hostid=02f14d39-9b07-4abc-bc4a-e88d43a336ca -a 10.0.0.1 -t tcp -s 4420 00:17:02.853 00:17:02.853 Discovery Log Number of Records 2, Generation counter 2 00:17:02.853 =====Discovery Log Entry 0====== 00:17:02.853 trtype: tcp 00:17:02.853 adrfam: ipv4 00:17:02.853 subtype: current discovery subsystem 00:17:02.853 treq: not specified, sq flow control disable supported 00:17:02.853 portid: 1 00:17:02.853 trsvcid: 4420 00:17:02.853 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:17:02.853 traddr: 10.0.0.1 00:17:02.853 eflags: none 00:17:02.853 sectype: none 00:17:02.853 =====Discovery Log Entry 1====== 00:17:02.853 trtype: tcp 00:17:02.853 adrfam: ipv4 00:17:02.853 subtype: nvme subsystem 00:17:02.853 treq: not 
specified, sq flow control disable supported 00:17:02.853 portid: 1 00:17:02.853 trsvcid: 4420 00:17:02.853 subnqn: nqn.2016-06.io.spdk:testnqn 00:17:02.853 traddr: 10.0.0.1 00:17:02.853 eflags: none 00:17:02.853 sectype: none 00:17:02.853 11:00:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:17:02.853 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:17:03.112 ===================================================== 00:17:03.112 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:17:03.112 ===================================================== 00:17:03.112 Controller Capabilities/Features 00:17:03.112 ================================ 00:17:03.112 Vendor ID: 0000 00:17:03.112 Subsystem Vendor ID: 0000 00:17:03.112 Serial Number: 130f58f6b9b17e89fe9a 00:17:03.112 Model Number: Linux 00:17:03.112 Firmware Version: 6.8.9-20 00:17:03.112 Recommended Arb Burst: 0 00:17:03.112 IEEE OUI Identifier: 00 00 00 00:17:03.112 Multi-path I/O 00:17:03.112 May have multiple subsystem ports: No 00:17:03.112 May have multiple controllers: No 00:17:03.112 Associated with SR-IOV VF: No 00:17:03.112 Max Data Transfer Size: Unlimited 00:17:03.112 Max Number of Namespaces: 0 00:17:03.112 Max Number of I/O Queues: 1024 00:17:03.112 NVMe Specification Version (VS): 1.3 00:17:03.112 NVMe Specification Version (Identify): 1.3 00:17:03.112 Maximum Queue Entries: 1024 00:17:03.112 Contiguous Queues Required: No 00:17:03.112 Arbitration Mechanisms Supported 00:17:03.112 Weighted Round Robin: Not Supported 00:17:03.112 Vendor Specific: Not Supported 00:17:03.112 Reset Timeout: 7500 ms 00:17:03.112 Doorbell Stride: 4 bytes 00:17:03.112 NVM Subsystem Reset: Not Supported 00:17:03.112 Command Sets Supported 00:17:03.112 NVM Command Set: Supported 00:17:03.112 Boot Partition: Not Supported 00:17:03.112 Memory Page Size Minimum: 4096 bytes 00:17:03.112 Memory Page Size Maximum: 4096 bytes 00:17:03.112 Persistent Memory Region: Not Supported 00:17:03.112 Optional Asynchronous Events Supported 00:17:03.112 Namespace Attribute Notices: Not Supported 00:17:03.112 Firmware Activation Notices: Not Supported 00:17:03.112 ANA Change Notices: Not Supported 00:17:03.112 PLE Aggregate Log Change Notices: Not Supported 00:17:03.112 LBA Status Info Alert Notices: Not Supported 00:17:03.112 EGE Aggregate Log Change Notices: Not Supported 00:17:03.112 Normal NVM Subsystem Shutdown event: Not Supported 00:17:03.112 Zone Descriptor Change Notices: Not Supported 00:17:03.112 Discovery Log Change Notices: Supported 00:17:03.112 Controller Attributes 00:17:03.112 128-bit Host Identifier: Not Supported 00:17:03.112 Non-Operational Permissive Mode: Not Supported 00:17:03.112 NVM Sets: Not Supported 00:17:03.112 Read Recovery Levels: Not Supported 00:17:03.112 Endurance Groups: Not Supported 00:17:03.112 Predictable Latency Mode: Not Supported 00:17:03.112 Traffic Based Keep ALive: Not Supported 00:17:03.112 Namespace Granularity: Not Supported 00:17:03.112 SQ Associations: Not Supported 00:17:03.112 UUID List: Not Supported 00:17:03.112 Multi-Domain Subsystem: Not Supported 00:17:03.112 Fixed Capacity Management: Not Supported 00:17:03.112 Variable Capacity Management: Not Supported 00:17:03.112 Delete Endurance Group: Not Supported 00:17:03.112 Delete NVM Set: Not Supported 00:17:03.112 Extended LBA Formats Supported: Not Supported 00:17:03.112 Flexible Data 
Placement Supported: Not Supported 00:17:03.112 00:17:03.112 Controller Memory Buffer Support 00:17:03.112 ================================ 00:17:03.112 Supported: No 00:17:03.112 00:17:03.112 Persistent Memory Region Support 00:17:03.112 ================================ 00:17:03.112 Supported: No 00:17:03.112 00:17:03.112 Admin Command Set Attributes 00:17:03.112 ============================ 00:17:03.112 Security Send/Receive: Not Supported 00:17:03.112 Format NVM: Not Supported 00:17:03.112 Firmware Activate/Download: Not Supported 00:17:03.112 Namespace Management: Not Supported 00:17:03.112 Device Self-Test: Not Supported 00:17:03.112 Directives: Not Supported 00:17:03.112 NVMe-MI: Not Supported 00:17:03.112 Virtualization Management: Not Supported 00:17:03.112 Doorbell Buffer Config: Not Supported 00:17:03.112 Get LBA Status Capability: Not Supported 00:17:03.112 Command & Feature Lockdown Capability: Not Supported 00:17:03.112 Abort Command Limit: 1 00:17:03.112 Async Event Request Limit: 1 00:17:03.112 Number of Firmware Slots: N/A 00:17:03.112 Firmware Slot 1 Read-Only: N/A 00:17:03.112 Firmware Activation Without Reset: N/A 00:17:03.112 Multiple Update Detection Support: N/A 00:17:03.112 Firmware Update Granularity: No Information Provided 00:17:03.112 Per-Namespace SMART Log: No 00:17:03.112 Asymmetric Namespace Access Log Page: Not Supported 00:17:03.112 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:17:03.112 Command Effects Log Page: Not Supported 00:17:03.112 Get Log Page Extended Data: Supported 00:17:03.112 Telemetry Log Pages: Not Supported 00:17:03.112 Persistent Event Log Pages: Not Supported 00:17:03.112 Supported Log Pages Log Page: May Support 00:17:03.112 Commands Supported & Effects Log Page: Not Supported 00:17:03.112 Feature Identifiers & Effects Log Page:May Support 00:17:03.112 NVMe-MI Commands & Effects Log Page: May Support 00:17:03.112 Data Area 4 for Telemetry Log: Not Supported 00:17:03.112 Error Log Page Entries Supported: 1 00:17:03.112 Keep Alive: Not Supported 00:17:03.112 00:17:03.112 NVM Command Set Attributes 00:17:03.112 ========================== 00:17:03.112 Submission Queue Entry Size 00:17:03.112 Max: 1 00:17:03.112 Min: 1 00:17:03.112 Completion Queue Entry Size 00:17:03.112 Max: 1 00:17:03.112 Min: 1 00:17:03.112 Number of Namespaces: 0 00:17:03.112 Compare Command: Not Supported 00:17:03.112 Write Uncorrectable Command: Not Supported 00:17:03.112 Dataset Management Command: Not Supported 00:17:03.112 Write Zeroes Command: Not Supported 00:17:03.112 Set Features Save Field: Not Supported 00:17:03.112 Reservations: Not Supported 00:17:03.112 Timestamp: Not Supported 00:17:03.112 Copy: Not Supported 00:17:03.112 Volatile Write Cache: Not Present 00:17:03.112 Atomic Write Unit (Normal): 1 00:17:03.112 Atomic Write Unit (PFail): 1 00:17:03.112 Atomic Compare & Write Unit: 1 00:17:03.112 Fused Compare & Write: Not Supported 00:17:03.112 Scatter-Gather List 00:17:03.112 SGL Command Set: Supported 00:17:03.112 SGL Keyed: Not Supported 00:17:03.112 SGL Bit Bucket Descriptor: Not Supported 00:17:03.112 SGL Metadata Pointer: Not Supported 00:17:03.112 Oversized SGL: Not Supported 00:17:03.112 SGL Metadata Address: Not Supported 00:17:03.112 SGL Offset: Supported 00:17:03.112 Transport SGL Data Block: Not Supported 00:17:03.112 Replay Protected Memory Block: Not Supported 00:17:03.112 00:17:03.112 Firmware Slot Information 00:17:03.112 ========================= 00:17:03.112 Active slot: 0 00:17:03.112 00:17:03.112 00:17:03.112 Error Log 
00:17:03.112 ========= 00:17:03.112 00:17:03.112 Active Namespaces 00:17:03.112 ================= 00:17:03.112 Discovery Log Page 00:17:03.112 ================== 00:17:03.112 Generation Counter: 2 00:17:03.112 Number of Records: 2 00:17:03.112 Record Format: 0 00:17:03.112 00:17:03.112 Discovery Log Entry 0 00:17:03.112 ---------------------- 00:17:03.112 Transport Type: 3 (TCP) 00:17:03.112 Address Family: 1 (IPv4) 00:17:03.112 Subsystem Type: 3 (Current Discovery Subsystem) 00:17:03.112 Entry Flags: 00:17:03.112 Duplicate Returned Information: 0 00:17:03.112 Explicit Persistent Connection Support for Discovery: 0 00:17:03.112 Transport Requirements: 00:17:03.112 Secure Channel: Not Specified 00:17:03.112 Port ID: 1 (0x0001) 00:17:03.112 Controller ID: 65535 (0xffff) 00:17:03.113 Admin Max SQ Size: 32 00:17:03.113 Transport Service Identifier: 4420 00:17:03.113 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:17:03.113 Transport Address: 10.0.0.1 00:17:03.113 Discovery Log Entry 1 00:17:03.113 ---------------------- 00:17:03.113 Transport Type: 3 (TCP) 00:17:03.113 Address Family: 1 (IPv4) 00:17:03.113 Subsystem Type: 2 (NVM Subsystem) 00:17:03.113 Entry Flags: 00:17:03.113 Duplicate Returned Information: 0 00:17:03.113 Explicit Persistent Connection Support for Discovery: 0 00:17:03.113 Transport Requirements: 00:17:03.113 Secure Channel: Not Specified 00:17:03.113 Port ID: 1 (0x0001) 00:17:03.113 Controller ID: 65535 (0xffff) 00:17:03.113 Admin Max SQ Size: 32 00:17:03.113 Transport Service Identifier: 4420 00:17:03.113 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:17:03.113 Transport Address: 10.0.0.1 00:17:03.113 11:00:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:17:03.372 get_feature(0x01) failed 00:17:03.372 get_feature(0x02) failed 00:17:03.372 get_feature(0x04) failed 00:17:03.372 ===================================================== 00:17:03.372 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:17:03.372 ===================================================== 00:17:03.372 Controller Capabilities/Features 00:17:03.372 ================================ 00:17:03.372 Vendor ID: 0000 00:17:03.372 Subsystem Vendor ID: 0000 00:17:03.372 Serial Number: fb74deeec00119065830 00:17:03.372 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:17:03.372 Firmware Version: 6.8.9-20 00:17:03.372 Recommended Arb Burst: 6 00:17:03.372 IEEE OUI Identifier: 00 00 00 00:17:03.372 Multi-path I/O 00:17:03.372 May have multiple subsystem ports: Yes 00:17:03.372 May have multiple controllers: Yes 00:17:03.372 Associated with SR-IOV VF: No 00:17:03.372 Max Data Transfer Size: Unlimited 00:17:03.372 Max Number of Namespaces: 1024 00:17:03.372 Max Number of I/O Queues: 128 00:17:03.372 NVMe Specification Version (VS): 1.3 00:17:03.372 NVMe Specification Version (Identify): 1.3 00:17:03.372 Maximum Queue Entries: 1024 00:17:03.372 Contiguous Queues Required: No 00:17:03.372 Arbitration Mechanisms Supported 00:17:03.372 Weighted Round Robin: Not Supported 00:17:03.372 Vendor Specific: Not Supported 00:17:03.372 Reset Timeout: 7500 ms 00:17:03.372 Doorbell Stride: 4 bytes 00:17:03.372 NVM Subsystem Reset: Not Supported 00:17:03.372 Command Sets Supported 00:17:03.372 NVM Command Set: Supported 00:17:03.372 Boot Partition: Not Supported 00:17:03.372 Memory 
Page Size Minimum: 4096 bytes 00:17:03.372 Memory Page Size Maximum: 4096 bytes 00:17:03.372 Persistent Memory Region: Not Supported 00:17:03.372 Optional Asynchronous Events Supported 00:17:03.372 Namespace Attribute Notices: Supported 00:17:03.372 Firmware Activation Notices: Not Supported 00:17:03.372 ANA Change Notices: Supported 00:17:03.372 PLE Aggregate Log Change Notices: Not Supported 00:17:03.372 LBA Status Info Alert Notices: Not Supported 00:17:03.372 EGE Aggregate Log Change Notices: Not Supported 00:17:03.372 Normal NVM Subsystem Shutdown event: Not Supported 00:17:03.372 Zone Descriptor Change Notices: Not Supported 00:17:03.372 Discovery Log Change Notices: Not Supported 00:17:03.372 Controller Attributes 00:17:03.372 128-bit Host Identifier: Supported 00:17:03.372 Non-Operational Permissive Mode: Not Supported 00:17:03.372 NVM Sets: Not Supported 00:17:03.372 Read Recovery Levels: Not Supported 00:17:03.372 Endurance Groups: Not Supported 00:17:03.372 Predictable Latency Mode: Not Supported 00:17:03.372 Traffic Based Keep ALive: Supported 00:17:03.372 Namespace Granularity: Not Supported 00:17:03.372 SQ Associations: Not Supported 00:17:03.372 UUID List: Not Supported 00:17:03.372 Multi-Domain Subsystem: Not Supported 00:17:03.372 Fixed Capacity Management: Not Supported 00:17:03.372 Variable Capacity Management: Not Supported 00:17:03.372 Delete Endurance Group: Not Supported 00:17:03.372 Delete NVM Set: Not Supported 00:17:03.372 Extended LBA Formats Supported: Not Supported 00:17:03.372 Flexible Data Placement Supported: Not Supported 00:17:03.372 00:17:03.372 Controller Memory Buffer Support 00:17:03.372 ================================ 00:17:03.372 Supported: No 00:17:03.372 00:17:03.372 Persistent Memory Region Support 00:17:03.372 ================================ 00:17:03.372 Supported: No 00:17:03.372 00:17:03.372 Admin Command Set Attributes 00:17:03.372 ============================ 00:17:03.372 Security Send/Receive: Not Supported 00:17:03.372 Format NVM: Not Supported 00:17:03.372 Firmware Activate/Download: Not Supported 00:17:03.372 Namespace Management: Not Supported 00:17:03.372 Device Self-Test: Not Supported 00:17:03.372 Directives: Not Supported 00:17:03.372 NVMe-MI: Not Supported 00:17:03.372 Virtualization Management: Not Supported 00:17:03.372 Doorbell Buffer Config: Not Supported 00:17:03.372 Get LBA Status Capability: Not Supported 00:17:03.372 Command & Feature Lockdown Capability: Not Supported 00:17:03.372 Abort Command Limit: 4 00:17:03.372 Async Event Request Limit: 4 00:17:03.372 Number of Firmware Slots: N/A 00:17:03.372 Firmware Slot 1 Read-Only: N/A 00:17:03.373 Firmware Activation Without Reset: N/A 00:17:03.373 Multiple Update Detection Support: N/A 00:17:03.373 Firmware Update Granularity: No Information Provided 00:17:03.373 Per-Namespace SMART Log: Yes 00:17:03.373 Asymmetric Namespace Access Log Page: Supported 00:17:03.373 ANA Transition Time : 10 sec 00:17:03.373 00:17:03.373 Asymmetric Namespace Access Capabilities 00:17:03.373 ANA Optimized State : Supported 00:17:03.373 ANA Non-Optimized State : Supported 00:17:03.373 ANA Inaccessible State : Supported 00:17:03.373 ANA Persistent Loss State : Supported 00:17:03.373 ANA Change State : Supported 00:17:03.373 ANAGRPID is not changed : No 00:17:03.373 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:17:03.373 00:17:03.373 ANA Group Identifier Maximum : 128 00:17:03.373 Number of ANA Group Identifiers : 128 00:17:03.373 Max Number of Allowed Namespaces : 1024 00:17:03.373 Subsystem 
NQN: nqn.2016-06.io.spdk:testnqn 00:17:03.373 Command Effects Log Page: Supported 00:17:03.373 Get Log Page Extended Data: Supported 00:17:03.373 Telemetry Log Pages: Not Supported 00:17:03.373 Persistent Event Log Pages: Not Supported 00:17:03.373 Supported Log Pages Log Page: May Support 00:17:03.373 Commands Supported & Effects Log Page: Not Supported 00:17:03.373 Feature Identifiers & Effects Log Page:May Support 00:17:03.373 NVMe-MI Commands & Effects Log Page: May Support 00:17:03.373 Data Area 4 for Telemetry Log: Not Supported 00:17:03.373 Error Log Page Entries Supported: 128 00:17:03.373 Keep Alive: Supported 00:17:03.373 Keep Alive Granularity: 1000 ms 00:17:03.373 00:17:03.373 NVM Command Set Attributes 00:17:03.373 ========================== 00:17:03.373 Submission Queue Entry Size 00:17:03.373 Max: 64 00:17:03.373 Min: 64 00:17:03.373 Completion Queue Entry Size 00:17:03.373 Max: 16 00:17:03.373 Min: 16 00:17:03.373 Number of Namespaces: 1024 00:17:03.373 Compare Command: Not Supported 00:17:03.373 Write Uncorrectable Command: Not Supported 00:17:03.373 Dataset Management Command: Supported 00:17:03.373 Write Zeroes Command: Supported 00:17:03.373 Set Features Save Field: Not Supported 00:17:03.373 Reservations: Not Supported 00:17:03.373 Timestamp: Not Supported 00:17:03.373 Copy: Not Supported 00:17:03.373 Volatile Write Cache: Present 00:17:03.373 Atomic Write Unit (Normal): 1 00:17:03.373 Atomic Write Unit (PFail): 1 00:17:03.373 Atomic Compare & Write Unit: 1 00:17:03.373 Fused Compare & Write: Not Supported 00:17:03.373 Scatter-Gather List 00:17:03.373 SGL Command Set: Supported 00:17:03.373 SGL Keyed: Not Supported 00:17:03.373 SGL Bit Bucket Descriptor: Not Supported 00:17:03.373 SGL Metadata Pointer: Not Supported 00:17:03.373 Oversized SGL: Not Supported 00:17:03.373 SGL Metadata Address: Not Supported 00:17:03.373 SGL Offset: Supported 00:17:03.373 Transport SGL Data Block: Not Supported 00:17:03.373 Replay Protected Memory Block: Not Supported 00:17:03.373 00:17:03.373 Firmware Slot Information 00:17:03.373 ========================= 00:17:03.373 Active slot: 0 00:17:03.373 00:17:03.373 Asymmetric Namespace Access 00:17:03.373 =========================== 00:17:03.373 Change Count : 0 00:17:03.373 Number of ANA Group Descriptors : 1 00:17:03.373 ANA Group Descriptor : 0 00:17:03.373 ANA Group ID : 1 00:17:03.373 Number of NSID Values : 1 00:17:03.373 Change Count : 0 00:17:03.373 ANA State : 1 00:17:03.373 Namespace Identifier : 1 00:17:03.373 00:17:03.373 Commands Supported and Effects 00:17:03.373 ============================== 00:17:03.373 Admin Commands 00:17:03.373 -------------- 00:17:03.373 Get Log Page (02h): Supported 00:17:03.373 Identify (06h): Supported 00:17:03.373 Abort (08h): Supported 00:17:03.373 Set Features (09h): Supported 00:17:03.373 Get Features (0Ah): Supported 00:17:03.373 Asynchronous Event Request (0Ch): Supported 00:17:03.373 Keep Alive (18h): Supported 00:17:03.373 I/O Commands 00:17:03.373 ------------ 00:17:03.373 Flush (00h): Supported 00:17:03.373 Write (01h): Supported LBA-Change 00:17:03.373 Read (02h): Supported 00:17:03.373 Write Zeroes (08h): Supported LBA-Change 00:17:03.373 Dataset Management (09h): Supported 00:17:03.373 00:17:03.373 Error Log 00:17:03.373 ========= 00:17:03.373 Entry: 0 00:17:03.373 Error Count: 0x3 00:17:03.373 Submission Queue Id: 0x0 00:17:03.373 Command Id: 0x5 00:17:03.373 Phase Bit: 0 00:17:03.373 Status Code: 0x2 00:17:03.373 Status Code Type: 0x0 00:17:03.373 Do Not Retry: 1 00:17:03.373 Error 
Location: 0x28 00:17:03.373 LBA: 0x0 00:17:03.373 Namespace: 0x0 00:17:03.373 Vendor Log Page: 0x0 00:17:03.373 ----------- 00:17:03.373 Entry: 1 00:17:03.373 Error Count: 0x2 00:17:03.373 Submission Queue Id: 0x0 00:17:03.373 Command Id: 0x5 00:17:03.373 Phase Bit: 0 00:17:03.373 Status Code: 0x2 00:17:03.373 Status Code Type: 0x0 00:17:03.373 Do Not Retry: 1 00:17:03.373 Error Location: 0x28 00:17:03.373 LBA: 0x0 00:17:03.373 Namespace: 0x0 00:17:03.373 Vendor Log Page: 0x0 00:17:03.373 ----------- 00:17:03.373 Entry: 2 00:17:03.373 Error Count: 0x1 00:17:03.373 Submission Queue Id: 0x0 00:17:03.373 Command Id: 0x4 00:17:03.373 Phase Bit: 0 00:17:03.373 Status Code: 0x2 00:17:03.373 Status Code Type: 0x0 00:17:03.373 Do Not Retry: 1 00:17:03.373 Error Location: 0x28 00:17:03.373 LBA: 0x0 00:17:03.373 Namespace: 0x0 00:17:03.373 Vendor Log Page: 0x0 00:17:03.373 00:17:03.373 Number of Queues 00:17:03.373 ================ 00:17:03.373 Number of I/O Submission Queues: 128 00:17:03.373 Number of I/O Completion Queues: 128 00:17:03.373 00:17:03.373 ZNS Specific Controller Data 00:17:03.373 ============================ 00:17:03.373 Zone Append Size Limit: 0 00:17:03.373 00:17:03.373 00:17:03.373 Active Namespaces 00:17:03.373 ================= 00:17:03.373 get_feature(0x05) failed 00:17:03.373 Namespace ID:1 00:17:03.373 Command Set Identifier: NVM (00h) 00:17:03.373 Deallocate: Supported 00:17:03.373 Deallocated/Unwritten Error: Not Supported 00:17:03.373 Deallocated Read Value: Unknown 00:17:03.373 Deallocate in Write Zeroes: Not Supported 00:17:03.373 Deallocated Guard Field: 0xFFFF 00:17:03.373 Flush: Supported 00:17:03.373 Reservation: Not Supported 00:17:03.373 Namespace Sharing Capabilities: Multiple Controllers 00:17:03.373 Size (in LBAs): 1310720 (5GiB) 00:17:03.373 Capacity (in LBAs): 1310720 (5GiB) 00:17:03.373 Utilization (in LBAs): 1310720 (5GiB) 00:17:03.373 UUID: bfa0c10c-0dcf-45e6-8f50-77cc888438e9 00:17:03.373 Thin Provisioning: Not Supported 00:17:03.373 Per-NS Atomic Units: Yes 00:17:03.373 Atomic Boundary Size (Normal): 0 00:17:03.373 Atomic Boundary Size (PFail): 0 00:17:03.373 Atomic Boundary Offset: 0 00:17:03.373 NGUID/EUI64 Never Reused: No 00:17:03.373 ANA group ID: 1 00:17:03.373 Namespace Write Protected: No 00:17:03.373 Number of LBA Formats: 1 00:17:03.373 Current LBA Format: LBA Format #00 00:17:03.373 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:17:03.373 00:17:03.373 11:00:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:17:03.373 11:00:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:03.373 11:00:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:17:03.373 11:00:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:03.373 11:00:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:17:03.373 11:00:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:03.373 11:00:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:03.373 rmmod nvme_tcp 00:17:03.373 rmmod nvme_fabrics 00:17:03.373 11:00:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:03.373 11:00:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:17:03.373 11:00:50 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:17:03.373 11:00:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:17:03.373 11:00:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:03.373 11:00:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:03.373 11:00:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:03.373 11:00:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:17:03.373 11:00:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:17:03.373 11:00:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:03.373 11:00:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:17:03.373 11:00:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:03.373 11:00:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:03.374 11:00:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:03.374 11:00:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:03.374 11:00:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:03.374 11:00:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:03.374 11:00:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:03.374 11:00:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:03.374 11:00:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:03.374 11:00:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:03.374 11:00:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:03.374 11:00:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:03.633 11:00:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:03.633 11:00:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:03.633 11:00:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:03.633 11:00:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:03.633 11:00:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:03.633 11:00:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:03.633 11:00:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:03.633 11:00:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@300 -- 
# return 0 00:17:03.633 11:00:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:17:03.633 11:00:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:17:03.633 11:00:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:17:03.633 11:00:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:17:03.633 11:00:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:17:03.633 11:00:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:17:03.633 11:00:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:17:03.633 11:00:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:17:03.633 11:00:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:17:03.633 11:00:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:04.201 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:04.462 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:17:04.462 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:17:04.462 00:17:04.462 real 0m3.183s 00:17:04.462 user 0m1.180s 00:17:04.462 sys 0m1.407s 00:17:04.462 11:00:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:04.462 11:00:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.462 ************************************ 00:17:04.462 END TEST nvmf_identify_kernel_target 00:17:04.462 ************************************ 00:17:04.462 11:00:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:17:04.462 11:00:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:04.462 11:00:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:04.462 11:00:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.462 ************************************ 00:17:04.462 START TEST nvmf_auth_host 00:17:04.462 ************************************ 00:17:04.462 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:17:04.738 * Looking for test storage... 
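Note: the clean_kernel_target trace above dismantles the kernel nvmet target through configfs in the reverse order of its creation, after nvmftestfini has already removed the SPDK_NVMF-tagged iptables rules and deleted the veth/bridge topology and the nvmf_tgt_ns_spdk namespace. A minimal sketch of the configfs teardown, using the same paths seen in the trace; the redirect target of the bare "echo 0" is not visible in the xtrace output and is assumed here to be the namespace's enable attribute:

    # disable the namespace before removing it (assumed redirect target; xtrace does not show redirections)
    echo 0 > /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/enable
    # detach the subsystem from port 1, then remove the namespace, port and subsystem directories
    rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
    rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
    rmdir /sys/kernel/config/nvmet/ports/1
    rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    # finally unload the transport and core modules
    modprobe -r nvmet_tcp nvmet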
00:17:04.738 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:04.738 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:04.738 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:17:04.738 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:04.738 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:04.738 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:04.738 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:04.738 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:04.738 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:17:04.738 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:17:04.738 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:17:04.738 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:17:04.738 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:17:04.738 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:17:04.738 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:17:04.738 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:04.738 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:17:04.738 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:17:04.738 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:04.738 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:04.738 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:17:04.739 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:17:04.739 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:04.739 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:17:04.739 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:17:04.739 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:17:04.739 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:17:04.739 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:04.739 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:17:04.739 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:17:04.739 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:04.739 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:04.739 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:17:04.739 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:04.739 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:04.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:04.739 --rc genhtml_branch_coverage=1 00:17:04.739 --rc genhtml_function_coverage=1 00:17:04.739 --rc genhtml_legend=1 00:17:04.739 --rc geninfo_all_blocks=1 00:17:04.739 --rc geninfo_unexecuted_blocks=1 00:17:04.739 00:17:04.739 ' 00:17:04.739 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:04.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:04.739 --rc genhtml_branch_coverage=1 00:17:04.739 --rc genhtml_function_coverage=1 00:17:04.739 --rc genhtml_legend=1 00:17:04.739 --rc geninfo_all_blocks=1 00:17:04.739 --rc geninfo_unexecuted_blocks=1 00:17:04.739 00:17:04.739 ' 00:17:04.739 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:04.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:04.739 --rc genhtml_branch_coverage=1 00:17:04.739 --rc genhtml_function_coverage=1 00:17:04.739 --rc genhtml_legend=1 00:17:04.739 --rc geninfo_all_blocks=1 00:17:04.739 --rc geninfo_unexecuted_blocks=1 00:17:04.739 00:17:04.739 ' 00:17:04.739 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:04.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:04.739 --rc genhtml_branch_coverage=1 00:17:04.739 --rc genhtml_function_coverage=1 00:17:04.739 --rc genhtml_legend=1 00:17:04.739 --rc geninfo_all_blocks=1 00:17:04.739 --rc geninfo_unexecuted_blocks=1 00:17:04.739 00:17:04.739 ' 00:17:04.739 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:04.739 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:17:04.739 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:04.739 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:04.739 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:04.739 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:04.739 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:04.739 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:04.739 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:04.739 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:04.739 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:04.739 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:04.739 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:17:04.739 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:17:04.739 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:04.739 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:04.739 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:04.739 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:04.739 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:04.739 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:17:04.739 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:04.739 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:04.739 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:04.739 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:04.739 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:04.739 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:04.739 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:17:04.739 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:04.739 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:17:04.739 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:04.739 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:04.739 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:04.739 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:04.739 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:04.739 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:04.739 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:04.739 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:04.739 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:04.739 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:04.739 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:17:04.739 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:17:04.739 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:17:04.739 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:17:04.739 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:04.739 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:17:04.739 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:17:04.739 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:17:04.739 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:17:04.739 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:04.739 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:04.739 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:04.739 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:04.739 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:04.739 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:04.739 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:04.739 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:04.739 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:04.739 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:04.739 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:04.739 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:04.739 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:04.739 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:04.740 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:04.740 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:04.740 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:04.740 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:04.740 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:04.740 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:04.740 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:04.740 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:04.740 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:04.740 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:04.740 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:04.740 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:04.740 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:04.740 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:04.740 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:04.740 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:04.740 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:04.740 Cannot find device "nvmf_init_br" 00:17:04.740 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:17:04.740 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:04.740 Cannot find device "nvmf_init_br2" 00:17:04.740 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:17:04.740 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:04.740 Cannot find device "nvmf_tgt_br" 00:17:04.740 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # true 00:17:04.740 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:04.740 Cannot find device "nvmf_tgt_br2" 00:17:04.740 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # true 00:17:04.740 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:05.020 Cannot find device "nvmf_init_br" 00:17:05.020 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # true 00:17:05.020 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:05.020 Cannot find device "nvmf_init_br2" 00:17:05.020 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # true 00:17:05.020 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:05.020 Cannot find device "nvmf_tgt_br" 00:17:05.020 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # true 00:17:05.020 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:05.020 Cannot find device "nvmf_tgt_br2" 00:17:05.020 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # true 00:17:05.020 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:05.020 Cannot find device "nvmf_br" 00:17:05.020 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # true 00:17:05.020 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:05.020 Cannot find device "nvmf_init_if" 00:17:05.020 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # true 00:17:05.020 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:05.020 Cannot find device "nvmf_init_if2" 00:17:05.020 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # true 00:17:05.020 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:05.020 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:05.020 11:00:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # true 00:17:05.020 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:05.020 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:05.020 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # true 00:17:05.020 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:05.020 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:05.020 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:05.020 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:05.020 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:05.020 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:05.020 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:05.020 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:05.020 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:05.020 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:05.020 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:05.020 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:05.020 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:05.020 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:05.020 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:05.020 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:05.020 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:05.020 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:05.020 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:05.020 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:05.020 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:05.020 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:05.020 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:05.020 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:05.020 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 
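Note: the nvmf_veth_init trace above (continuing just below with the last bridge attachment and the firewall rules) builds the test network used for the rest of this run. Each interface is one end of a veth pair whose peer ("*_br") is attached to the nvmf_br bridge, and the target-side interfaces are moved into the nvmf_tgt_ns_spdk namespace. The resulting addressing, as configured in the trace:

    nvmf_init_if   10.0.0.1/24   host side, initiator
    nvmf_init_if2  10.0.0.2/24   host side, initiator
    nvmf_tgt_if    10.0.0.3/24   inside netns nvmf_tgt_ns_spdk, target
    nvmf_tgt_if2   10.0.0.4/24   inside netns nvmf_tgt_ns_spdk, target
    nvmf_br        Linux bridge joining the four *_br peer ends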
00:17:05.020 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:05.020 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:05.020 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:05.020 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:05.020 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:05.020 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:05.020 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:05.020 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:05.020 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:05.020 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:17:05.020 00:17:05.020 --- 10.0.0.3 ping statistics --- 00:17:05.020 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:05.020 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:17:05.020 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:05.020 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:05.020 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.051 ms 00:17:05.020 00:17:05.020 --- 10.0.0.4 ping statistics --- 00:17:05.020 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:05.020 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:17:05.020 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:05.279 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:05.279 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:17:05.279 00:17:05.279 --- 10.0.0.1 ping statistics --- 00:17:05.279 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:05.279 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:17:05.279 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:05.279 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:05.279 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.105 ms 00:17:05.279 00:17:05.279 --- 10.0.0.2 ping statistics --- 00:17:05.279 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:05.279 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:17:05.279 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:05.279 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@461 -- # return 0 00:17:05.279 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:05.279 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:05.279 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:05.279 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:05.279 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:05.279 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:05.279 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:05.279 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:17:05.279 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:05.279 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:05.279 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.279 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=78088 00:17:05.279 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:17:05.279 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 78088 00:17:05.279 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 78088 ']' 00:17:05.279 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:05.279 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:05.279 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
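Note: before starting the target application, the trace above opens TCP port 4420 on both initiator interfaces and verifies connectivity in both directions across the bridge (host to 10.0.0.3/10.0.0.4, and from inside the namespace back to 10.0.0.1/10.0.0.2). The iptables rules are tagged with an SPDK_NVMF comment precisely so that the nvmftestfini teardown seen earlier can strip them in one pass; the pairing, copied from the trace:

    # setup: accept NVMe/TCP traffic on port 4420 and tag the rule for later removal
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
    # teardown: drop every rule carrying the SPDK_NVMF tag
    iptables-save | grep -v SPDK_NVMF | iptables-restore

With the network in place, nvmf_tgt is launched inside the namespace (ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth) and waitforlisten polls /var/tmp/spdk.sock until the RPC socket is available.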
00:17:05.279 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:05.279 11:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.538 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:05.538 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:17:05.538 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:05.538 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:05.538 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.538 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:05.538 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:17:05.538 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:17:05.538 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:17:05.538 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:05.538 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:17:05.538 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:17:05.538 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:17:05.538 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:05.538 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=f1820d04181a7b1bfa26d8945ffee1ce 00:17:05.538 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:17:05.538 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.k7i 00:17:05.538 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key f1820d04181a7b1bfa26d8945ffee1ce 0 00:17:05.538 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 f1820d04181a7b1bfa26d8945ffee1ce 0 00:17:05.538 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:17:05.538 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:05.538 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=f1820d04181a7b1bfa26d8945ffee1ce 00:17:05.538 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:17:05.538 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:17:05.798 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.k7i 00:17:05.798 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.k7i 00:17:05.798 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.k7i 00:17:05.798 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:17:05.798 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:17:05.798 11:00:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:05.798 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:17:05.798 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:17:05.798 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:17:05.798 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:05.798 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=df7a43b024ec3b47367dd6243c976fd9287c2204cff1be5329c9a831c584320f 00:17:05.798 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:17:05.798 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.R4W 00:17:05.798 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key df7a43b024ec3b47367dd6243c976fd9287c2204cff1be5329c9a831c584320f 3 00:17:05.798 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 df7a43b024ec3b47367dd6243c976fd9287c2204cff1be5329c9a831c584320f 3 00:17:05.798 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:17:05.798 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:05.798 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=df7a43b024ec3b47367dd6243c976fd9287c2204cff1be5329c9a831c584320f 00:17:05.798 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:17:05.798 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:17:05.798 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.R4W 00:17:05.798 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.R4W 00:17:05.798 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.R4W 00:17:05.798 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:17:05.798 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:17:05.798 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:05.798 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:17:05.798 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:17:05.798 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:17:05.798 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:05.798 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=6feb4afb451be1c9fe54cc1fd96c25f030048413c1157a09 00:17:05.798 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:17:05.798 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.5Zl 00:17:05.798 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 6feb4afb451be1c9fe54cc1fd96c25f030048413c1157a09 0 00:17:05.798 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 6feb4afb451be1c9fe54cc1fd96c25f030048413c1157a09 0 
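Note: the gen_dhchap_key calls above and below prepare the DHCHAP secrets for the auth test. Each key is len/2 random bytes read with xxd -p from /dev/urandom, formatted into a DHHC-1 secret with the digest identifier from the trace's map (null=0, sha256=1, sha384=2, sha512=3) by an inline python snippet that xtrace does not expand, and written to a mode-0600 temp file under /tmp. A sketch of what that formatting step is expected to produce, assuming the standard DHHC-1 secret encoding also used by nvme-cli (base64 of the key bytes followed by their CRC-32, appended little-endian); the hex value is the keys[1] secret shown in the trace:

    key=6feb4afb451be1c9fe54cc1fd96c25f030048413c1157a09   # from: xxd -p -c0 -l 24 /dev/urandom
    hmac=0                                                 # 0=null, 1=sha256, 2=sha384, 3=sha512
    python3 - "$key" "$hmac" <<'PY'
    import base64, binascii, sys
    key = bytes.fromhex(sys.argv[1])
    crc = binascii.crc32(key).to_bytes(4, 'little')        # assumed CRC-32 trailer, little-endian
    print('DHHC-1:%02x:%s:' % (int(sys.argv[2]), base64.b64encode(key + crc).decode()))
    PY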
00:17:05.798 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:17:05.798 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:05.798 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=6feb4afb451be1c9fe54cc1fd96c25f030048413c1157a09 00:17:05.798 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:17:05.798 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:17:05.799 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.5Zl 00:17:05.799 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.5Zl 00:17:05.799 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.5Zl 00:17:05.799 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:17:05.799 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:17:05.799 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:05.799 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:17:05.799 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:17:05.799 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:17:05.799 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:05.799 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=fcb51d7c610aa25f4ef9105e0281f901620faa8163ad2d5e 00:17:05.799 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:17:05.799 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.54o 00:17:05.799 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key fcb51d7c610aa25f4ef9105e0281f901620faa8163ad2d5e 2 00:17:05.799 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 fcb51d7c610aa25f4ef9105e0281f901620faa8163ad2d5e 2 00:17:05.799 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:17:05.799 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:05.799 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=fcb51d7c610aa25f4ef9105e0281f901620faa8163ad2d5e 00:17:05.799 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:17:05.799 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:17:05.799 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.54o 00:17:05.799 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.54o 00:17:05.799 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.54o 00:17:05.799 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:17:05.799 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:17:05.799 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:05.799 11:00:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:17:05.799 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:17:05.799 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:17:05.799 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:05.799 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=71620bac11ab2b1a6cd7be9310dd1a96 00:17:05.799 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:17:05.799 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.6an 00:17:05.799 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 71620bac11ab2b1a6cd7be9310dd1a96 1 00:17:05.799 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 71620bac11ab2b1a6cd7be9310dd1a96 1 00:17:05.799 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:17:05.799 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:05.799 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=71620bac11ab2b1a6cd7be9310dd1a96 00:17:05.799 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:17:05.799 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:17:06.058 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.6an 00:17:06.058 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.6an 00:17:06.058 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.6an 00:17:06.058 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:17:06.058 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:17:06.058 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:06.059 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:17:06.059 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:17:06.059 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:17:06.059 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:06.059 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=0b6c4aa8195673f341f36c47ae164722 00:17:06.059 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:17:06.059 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.oNe 00:17:06.059 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 0b6c4aa8195673f341f36c47ae164722 1 00:17:06.059 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 0b6c4aa8195673f341f36c47ae164722 1 00:17:06.059 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:17:06.059 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:06.059 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=0b6c4aa8195673f341f36c47ae164722 00:17:06.059 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:17:06.059 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:17:06.059 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.oNe 00:17:06.059 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.oNe 00:17:06.059 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.oNe 00:17:06.059 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:17:06.059 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:17:06.059 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:06.059 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:17:06.059 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:17:06.059 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:17:06.059 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:06.059 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=5881e970592e15305a46b24cf00a16ddb16698155a5325e4 00:17:06.059 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:17:06.059 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.dty 00:17:06.059 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 5881e970592e15305a46b24cf00a16ddb16698155a5325e4 2 00:17:06.059 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 5881e970592e15305a46b24cf00a16ddb16698155a5325e4 2 00:17:06.059 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:17:06.059 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:06.059 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=5881e970592e15305a46b24cf00a16ddb16698155a5325e4 00:17:06.059 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:17:06.059 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:17:06.059 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.dty 00:17:06.059 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.dty 00:17:06.059 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.dty 00:17:06.059 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:17:06.059 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:17:06.059 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:06.059 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:17:06.059 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:17:06.059 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:17:06.059 11:00:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:06.059 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=e7d22d14c8cbcf57bfebda0bb7269364 00:17:06.059 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:17:06.059 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.0Cz 00:17:06.059 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key e7d22d14c8cbcf57bfebda0bb7269364 0 00:17:06.059 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 e7d22d14c8cbcf57bfebda0bb7269364 0 00:17:06.059 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:17:06.059 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:06.059 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=e7d22d14c8cbcf57bfebda0bb7269364 00:17:06.059 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:17:06.059 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:17:06.059 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.0Cz 00:17:06.059 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.0Cz 00:17:06.059 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.0Cz 00:17:06.059 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:17:06.059 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:17:06.059 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:06.059 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:17:06.059 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:17:06.059 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:17:06.059 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:06.059 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=fe0d5d5b0102ba685984913f2577ea20cebd949882eb1e64128c9f6676872383 00:17:06.059 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:17:06.059 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.2qk 00:17:06.059 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key fe0d5d5b0102ba685984913f2577ea20cebd949882eb1e64128c9f6676872383 3 00:17:06.059 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 fe0d5d5b0102ba685984913f2577ea20cebd949882eb1e64128c9f6676872383 3 00:17:06.059 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:17:06.059 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:06.059 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=fe0d5d5b0102ba685984913f2577ea20cebd949882eb1e64128c9f6676872383 00:17:06.059 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:17:06.059 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:17:06.318 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.2qk 00:17:06.318 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.2qk 00:17:06.318 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.2qk 00:17:06.318 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:17:06.318 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 78088 00:17:06.318 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 78088 ']' 00:17:06.318 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:06.318 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:06.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:06.318 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:06.318 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:06.318 11:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.577 11:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:06.577 11:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:17:06.577 11:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:06.577 11:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.k7i 00:17:06.577 11:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.577 11:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.577 11:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.577 11:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.R4W ]] 00:17:06.577 11:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.R4W 00:17:06.577 11:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.577 11:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.577 11:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.577 11:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:06.577 11:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.5Zl 00:17:06.577 11:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.577 11:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.577 11:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.577 11:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.54o ]] 00:17:06.577 11:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.54o 00:17:06.577 11:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.577 11:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.577 11:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.577 11:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:06.577 11:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.6an 00:17:06.577 11:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.577 11:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.577 11:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.577 11:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.oNe ]] 00:17:06.577 11:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.oNe 00:17:06.577 11:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.577 11:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.577 11:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.577 11:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:06.577 11:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.dty 00:17:06.577 11:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.577 11:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.577 11:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.577 11:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.0Cz ]] 00:17:06.577 11:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.0Cz 00:17:06.577 11:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.577 11:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.577 11:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.577 11:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:06.577 11:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.2qk 00:17:06.577 11:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.577 11:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.577 11:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.577 11:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:17:06.577 11:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:17:06.577 11:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:17:06.577 11:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:06.577 11:00:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:06.577 11:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:06.577 11:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:06.577 11:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:06.577 11:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:06.577 11:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:06.577 11:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:06.577 11:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:06.577 11:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:06.577 11:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:17:06.577 11:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:17:06.577 11:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:17:06.577 11:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:06.577 11:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:17:06.577 11:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:17:06.577 11:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:17:06.577 11:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:17:06.577 11:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:17:06.577 11:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:17:06.578 11:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:17:07.144 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:07.145 Waiting for block devices as requested 00:17:07.145 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:17:07.145 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:17:07.712 11:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:17:07.712 11:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:17:07.712 11:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:17:07.712 11:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:17:07.712 11:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:17:07.712 11:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:17:07.712 11:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:17:07.712 11:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:17:07.712 11:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:17:07.712 No valid GPT data, bailing 00:17:07.712 11:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:17:07.712 11:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:17:07.713 11:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:17:07.713 11:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:17:07.713 11:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:17:07.713 11:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:17:07.713 11:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:17:07.713 11:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:17:07.713 11:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:17:07.713 11:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:17:07.713 11:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:17:07.713 11:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:17:07.713 11:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:17:07.713 No valid GPT data, bailing 00:17:07.713 11:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:17:07.713 11:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:17:07.713 11:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@395 -- # return 1 00:17:07.713 11:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:17:07.713 11:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:17:07.713 11:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:17:07.713 11:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:17:07.713 11:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:17:07.713 11:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:17:07.713 11:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:17:07.713 11:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:17:07.713 11:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:17:07.713 11:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:17:07.974 No valid GPT data, bailing 00:17:07.974 11:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:17:07.974 11:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:17:07.974 11:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:17:07.974 11:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:17:07.974 11:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:17:07.974 11:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:17:07.974 11:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:17:07.974 11:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:17:07.974 11:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:17:07.974 11:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:17:07.974 11:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:17:07.974 11:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:17:07.974 11:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:17:07.974 No valid GPT data, bailing 00:17:07.974 11:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:17:07.974 11:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:17:07.974 11:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:17:07.974 11:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:17:07.974 11:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:17:07.974 11:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:07.974 11:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:17:07.974 11:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:17:07.974 11:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:17:07.974 11:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:17:07.974 11:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:17:07.974 11:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:17:07.974 11:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:17:07.974 11:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:17:07.974 11:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:17:07.974 11:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:17:07.974 11:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:17:07.974 11:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --hostid=02f14d39-9b07-4abc-bc4a-e88d43a336ca -a 10.0.0.1 -t tcp -s 4420 00:17:07.974 00:17:07.974 Discovery Log Number of Records 2, Generation counter 2 00:17:07.974 =====Discovery Log Entry 0====== 00:17:07.974 trtype: tcp 00:17:07.974 adrfam: ipv4 00:17:07.974 subtype: current discovery subsystem 00:17:07.974 treq: not specified, sq flow control disable supported 00:17:07.974 portid: 1 00:17:07.974 trsvcid: 4420 00:17:07.974 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:17:07.974 traddr: 10.0.0.1 00:17:07.974 eflags: none 00:17:07.974 sectype: none 00:17:07.974 =====Discovery Log Entry 1====== 00:17:07.974 trtype: tcp 00:17:07.974 adrfam: ipv4 00:17:07.974 subtype: nvme subsystem 00:17:07.974 treq: not specified, sq flow control disable supported 00:17:07.974 portid: 1 00:17:07.974 trsvcid: 4420 00:17:07.974 subnqn: nqn.2024-02.io.spdk:cnode0 00:17:07.974 traddr: 10.0.0.1 00:17:07.974 eflags: none 00:17:07.974 sectype: none 00:17:07.974 11:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:17:07.974 11:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:17:07.975 11:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:17:07.975 11:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:17:07.975 11:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:07.975 11:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:07.975 11:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:07.975 11:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:07.975 11:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmZlYjRhZmI0NTFiZTFjOWZlNTRjYzFmZDk2YzI1ZjAzMDA0ODQxM2MxMTU3YTA5YwNPfg==: 00:17:07.975 11:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:ZmNiNTFkN2M2MTBhYTI1ZjRlZjkxMDVlMDI4MWY5MDE2MjBmYWE4MTYzYWQyZDVlOVTYSQ==: 00:17:07.975 11:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:07.975 11:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:08.235 11:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmZlYjRhZmI0NTFiZTFjOWZlNTRjYzFmZDk2YzI1ZjAzMDA0ODQxM2MxMTU3YTA5YwNPfg==: 00:17:08.235 11:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmNiNTFkN2M2MTBhYTI1ZjRlZjkxMDVlMDI4MWY5MDE2MjBmYWE4MTYzYWQyZDVlOVTYSQ==: ]] 00:17:08.235 11:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmNiNTFkN2M2MTBhYTI1ZjRlZjkxMDVlMDI4MWY5MDE2MjBmYWE4MTYzYWQyZDVlOVTYSQ==: 00:17:08.235 11:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:17:08.235 11:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:17:08.235 11:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:17:08.235 11:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:08.235 11:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:17:08.235 11:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:08.235 11:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:17:08.235 11:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:08.235 11:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:08.235 11:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:08.235 11:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:08.235 11:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.235 11:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.235 11:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.235 11:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:08.235 11:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:08.235 11:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:08.235 11:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:08.235 11:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:08.235 11:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:08.235 11:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:08.235 11:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:08.235 11:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:08.235 11:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 
10.0.0.1 ]] 00:17:08.235 11:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:08.235 11:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:08.235 11:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.235 11:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.235 nvme0n1 00:17:08.235 11:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.235 11:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:08.235 11:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:08.235 11:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.235 11:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.235 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.235 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.235 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:08.235 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.235 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.235 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.235 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:17:08.235 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:08.235 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:08.235 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:17:08.235 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:08.235 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:08.235 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:08.235 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:08.235 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjE4MjBkMDQxODFhN2IxYmZhMjZkODk0NWZmZWUxY2VAPr2H: 00:17:08.235 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGY3YTQzYjAyNGVjM2I0NzM2N2RkNjI0M2M5NzZmZDkyODdjMjIwNGNmZjFiZTUzMjljOWE4MzFjNTg0MzIwZvyzlvY=: 00:17:08.235 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:08.235 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:08.235 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjE4MjBkMDQxODFhN2IxYmZhMjZkODk0NWZmZWUxY2VAPr2H: 00:17:08.235 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGY3YTQzYjAyNGVjM2I0NzM2N2RkNjI0M2M5NzZmZDkyODdjMjIwNGNmZjFiZTUzMjljOWE4MzFjNTg0MzIwZvyzlvY=: ]] 00:17:08.235 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZGY3YTQzYjAyNGVjM2I0NzM2N2RkNjI0M2M5NzZmZDkyODdjMjIwNGNmZjFiZTUzMjljOWE4MzFjNTg0MzIwZvyzlvY=: 00:17:08.235 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:17:08.235 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:08.235 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:08.235 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:08.235 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:08.235 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:08.235 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:08.235 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.235 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.235 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.235 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:08.235 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:08.235 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:08.235 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:08.235 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:08.235 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:08.235 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:08.235 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:08.235 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:08.235 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:08.235 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:08.235 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:08.235 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.235 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.494 nvme0n1 00:17:08.494 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.494 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:08.494 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:08.494 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.494 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.494 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.494 
11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.494 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:08.494 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.494 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.494 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.494 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:08.494 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:17:08.494 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:08.494 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:08.494 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:08.494 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:08.494 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmZlYjRhZmI0NTFiZTFjOWZlNTRjYzFmZDk2YzI1ZjAzMDA0ODQxM2MxMTU3YTA5YwNPfg==: 00:17:08.494 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmNiNTFkN2M2MTBhYTI1ZjRlZjkxMDVlMDI4MWY5MDE2MjBmYWE4MTYzYWQyZDVlOVTYSQ==: 00:17:08.494 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:08.494 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:08.494 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmZlYjRhZmI0NTFiZTFjOWZlNTRjYzFmZDk2YzI1ZjAzMDA0ODQxM2MxMTU3YTA5YwNPfg==: 00:17:08.494 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmNiNTFkN2M2MTBhYTI1ZjRlZjkxMDVlMDI4MWY5MDE2MjBmYWE4MTYzYWQyZDVlOVTYSQ==: ]] 00:17:08.494 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmNiNTFkN2M2MTBhYTI1ZjRlZjkxMDVlMDI4MWY5MDE2MjBmYWE4MTYzYWQyZDVlOVTYSQ==: 00:17:08.494 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:17:08.494 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:08.494 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:08.494 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:08.494 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:08.494 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:08.494 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:08.494 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.494 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.494 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.494 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:08.494 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:08.494 11:00:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:08.494 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:08.494 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:08.494 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:08.494 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:08.494 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:08.494 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:08.494 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:08.494 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:08.494 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:08.494 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.494 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.754 nvme0n1 00:17:08.754 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.754 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:08.754 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:08.754 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.754 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.754 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.754 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.754 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:08.754 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.754 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.754 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.754 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:08.754 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:17:08.754 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:08.754 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:08.754 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:08.754 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:08.754 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzE2MjBiYWMxMWFiMmIxYTZjZDdiZTkzMTBkZDFhOTbm1UA4: 00:17:08.754 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGI2YzRhYTgxOTU2NzNmMzQxZjM2YzQ3YWUxNjQ3MjJ9brEL: 00:17:08.754 11:00:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:08.754 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:08.754 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzE2MjBiYWMxMWFiMmIxYTZjZDdiZTkzMTBkZDFhOTbm1UA4: 00:17:08.754 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGI2YzRhYTgxOTU2NzNmMzQxZjM2YzQ3YWUxNjQ3MjJ9brEL: ]] 00:17:08.754 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGI2YzRhYTgxOTU2NzNmMzQxZjM2YzQ3YWUxNjQ3MjJ9brEL: 00:17:08.754 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:17:08.754 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:08.754 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:08.754 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:08.754 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:08.754 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:08.754 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:08.754 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.754 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.754 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.754 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:08.754 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:08.754 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:08.754 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:08.754 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:08.754 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:08.754 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:08.754 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:08.754 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:08.754 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:08.754 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:08.754 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:08.754 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.754 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.754 nvme0n1 00:17:08.754 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.754 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:08.754 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:08.754 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.754 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.754 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.754 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.754 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:08.754 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.754 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.754 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.754 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:08.754 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:17:08.754 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:08.754 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:08.754 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:08.754 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:08.754 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTg4MWU5NzA1OTJlMTUzMDVhNDZiMjRjZjAwYTE2ZGRiMTY2OTgxNTVhNTMyNWU0h6u0CA==: 00:17:08.754 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTdkMjJkMTRjOGNiY2Y1N2JmZWJkYTBiYjcyNjkzNjSmxfQN: 00:17:08.754 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:08.754 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:08.754 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTg4MWU5NzA1OTJlMTUzMDVhNDZiMjRjZjAwYTE2ZGRiMTY2OTgxNTVhNTMyNWU0h6u0CA==: 00:17:08.754 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTdkMjJkMTRjOGNiY2Y1N2JmZWJkYTBiYjcyNjkzNjSmxfQN: ]] 00:17:08.754 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTdkMjJkMTRjOGNiY2Y1N2JmZWJkYTBiYjcyNjkzNjSmxfQN: 00:17:08.754 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:17:08.754 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:08.755 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:08.755 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:08.755 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:08.755 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:08.755 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:08.755 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.755 11:00:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.755 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.755 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:08.755 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:08.755 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:08.755 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:08.755 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:08.755 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:08.755 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:08.755 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:08.755 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:08.755 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:08.755 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:08.755 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:08.755 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.755 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.014 nvme0n1 00:17:09.014 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.014 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:09.014 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:09.014 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.014 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.014 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.014 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.014 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:09.014 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.014 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.014 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.014 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:09.014 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:17:09.014 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:09.014 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:09.014 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:09.014 
11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:09.014 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmUwZDVkNWIwMTAyYmE2ODU5ODQ5MTNmMjU3N2VhMjBjZWJkOTQ5ODgyZWIxZTY0MTI4YzlmNjY3Njg3MjM4MxpwKII=: 00:17:09.014 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:09.014 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:09.014 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:09.014 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmUwZDVkNWIwMTAyYmE2ODU5ODQ5MTNmMjU3N2VhMjBjZWJkOTQ5ODgyZWIxZTY0MTI4YzlmNjY3Njg3MjM4MxpwKII=: 00:17:09.014 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:09.014 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:17:09.014 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:09.014 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:09.014 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:09.014 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:09.014 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:09.014 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:09.014 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.014 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.014 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.014 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:09.014 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:09.014 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:09.014 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:09.014 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:09.014 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:09.014 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:09.014 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:09.014 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:09.014 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:09.014 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:09.014 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:09.014 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.014 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
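Note on the secrets used throughout this trace: every keys[i]/ckeys[i] value generated by the gen_dhchap_key calls earlier follows the same pattern — a hex secret is read from /dev/urandom with xxd, written to a 0600 mktemp file, and wrapped as DHHC-1:<digest>:<base64 payload>:, where the digest index matches the trace's digests map (null=0, sha256=1, sha384=2, sha512=3). Below is a minimal stand-alone sketch of that step. The xxd/mktemp/chmod invocations are the ones visible in the trace; the base64 payload layout (ASCII hex secret followed by a little-endian CRC-32 trailer) and the python3 one-liner are my reading of the DH-HMAC-CHAP secret representation, not a copy of nvmf/common.sh, so treat those details as assumptions.

# sketch: generate and format one DH-HMAC-CHAP secret the way this trace does
digest=sha384; len=48                                   # length in hex characters
declare -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)          # same xxd invocation as the trace
file=$(mktemp -t "spdk.key-${digest}.XXX")
python3 - "$key" "${digests[$digest]}" > "$file" <<'EOF'
import base64, struct, sys, zlib
key, digest = sys.argv[1], int(sys.argv[2])
# assumed payload layout: ASCII hex secret + little-endian CRC-32 of the secret
payload = key.encode() + struct.pack('<I', zlib.crc32(key.encode()))
print('DHHC-1:{:02d}:{}:'.format(digest, base64.b64encode(payload).decode()))
EOF
chmod 0600 "$file"
echo "$file"                                            # e.g. /tmp/spdk.key-sha384.54o in the trace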
00:17:09.014 nvme0n1 00:17:09.014 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.014 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:09.014 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.014 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.014 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:09.273 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.273 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.273 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:09.273 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.273 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.273 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.273 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:09.273 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:09.273 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:17:09.273 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:09.273 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:09.273 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:09.273 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:09.273 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjE4MjBkMDQxODFhN2IxYmZhMjZkODk0NWZmZWUxY2VAPr2H: 00:17:09.273 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGY3YTQzYjAyNGVjM2I0NzM2N2RkNjI0M2M5NzZmZDkyODdjMjIwNGNmZjFiZTUzMjljOWE4MzFjNTg0MzIwZvyzlvY=: 00:17:09.273 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:09.273 11:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:09.533 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjE4MjBkMDQxODFhN2IxYmZhMjZkODk0NWZmZWUxY2VAPr2H: 00:17:09.533 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGY3YTQzYjAyNGVjM2I0NzM2N2RkNjI0M2M5NzZmZDkyODdjMjIwNGNmZjFiZTUzMjljOWE4MzFjNTg0MzIwZvyzlvY=: ]] 00:17:09.533 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGY3YTQzYjAyNGVjM2I0NzM2N2RkNjI0M2M5NzZmZDkyODdjMjIwNGNmZjFiZTUzMjljOWE4MzFjNTg0MzIwZvyzlvY=: 00:17:09.533 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:17:09.533 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:09.533 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:09.533 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:09.533 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:09.533 11:00:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:09.533 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:09.533 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.533 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.533 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.533 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:09.533 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:09.533 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:09.533 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:09.533 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:09.533 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:09.533 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:09.533 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:09.533 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:09.533 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:09.533 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:09.533 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:09.533 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.533 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.533 nvme0n1 00:17:09.533 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.533 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:09.533 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:09.533 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.533 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.533 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.533 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.533 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:09.533 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.533 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.533 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.533 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:09.533 11:00:56 
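The get_main_ns_ip helper traced at nvmf/common.sh@769-783 resolves which address the initiator should dial for the transport under test. A sketch of that logic (the TEST_TRANSPORT variable name and the indirection details are assumptions; the candidate map and the 10.0.0.1 result are from the trace):

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=(
            [rdma]=NVMF_FIRST_TARGET_IP   # nvmf/common.sh@772
            [tcp]=NVMF_INITIATOR_IP       # nvmf/common.sh@773
        )
        # Map the transport to the *name* of the variable holding the address,
        # then expand it indirectly: tcp -> NVMF_INITIATOR_IP -> 10.0.0.1 here.
        ip=${ip_candidates[${TEST_TRANSPORT:-tcp}]}
        echo "${!ip}"                     # nvmf/common.sh@783
    }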
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:17:09.533 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:09.533 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:09.533 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:09.533 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:09.533 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmZlYjRhZmI0NTFiZTFjOWZlNTRjYzFmZDk2YzI1ZjAzMDA0ODQxM2MxMTU3YTA5YwNPfg==: 00:17:09.533 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmNiNTFkN2M2MTBhYTI1ZjRlZjkxMDVlMDI4MWY5MDE2MjBmYWE4MTYzYWQyZDVlOVTYSQ==: 00:17:09.533 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:09.533 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:09.533 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmZlYjRhZmI0NTFiZTFjOWZlNTRjYzFmZDk2YzI1ZjAzMDA0ODQxM2MxMTU3YTA5YwNPfg==: 00:17:09.533 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmNiNTFkN2M2MTBhYTI1ZjRlZjkxMDVlMDI4MWY5MDE2MjBmYWE4MTYzYWQyZDVlOVTYSQ==: ]] 00:17:09.533 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmNiNTFkN2M2MTBhYTI1ZjRlZjkxMDVlMDI4MWY5MDE2MjBmYWE4MTYzYWQyZDVlOVTYSQ==: 00:17:09.533 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:17:09.533 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:09.533 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:09.533 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:09.533 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:09.533 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:09.533 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:09.533 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.533 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.533 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.533 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:09.533 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:09.533 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:09.533 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:09.533 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:09.533 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:09.533 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:09.533 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:09.533 11:00:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:09.533 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:09.533 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:09.533 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:09.533 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.533 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.792 nvme0n1 00:17:09.792 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.792 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:09.792 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:09.792 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.792 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.792 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.792 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.792 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:09.792 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.792 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.792 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.792 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:09.792 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:17:09.792 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:09.792 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:09.792 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:09.792 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:09.792 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzE2MjBiYWMxMWFiMmIxYTZjZDdiZTkzMTBkZDFhOTbm1UA4: 00:17:09.792 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGI2YzRhYTgxOTU2NzNmMzQxZjM2YzQ3YWUxNjQ3MjJ9brEL: 00:17:09.793 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:09.793 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:09.793 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzE2MjBiYWMxMWFiMmIxYTZjZDdiZTkzMTBkZDFhOTbm1UA4: 00:17:09.793 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGI2YzRhYTgxOTU2NzNmMzQxZjM2YzQ3YWUxNjQ3MjJ9brEL: ]] 00:17:09.793 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGI2YzRhYTgxOTU2NzNmMzQxZjM2YzQ3YWUxNjQ3MjJ9brEL: 00:17:09.793 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:17:09.793 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:09.793 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:09.793 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:09.793 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:09.793 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:09.793 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:09.793 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.793 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.793 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.793 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:09.793 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:09.793 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:09.793 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:09.793 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:09.793 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:09.793 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:09.793 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:09.793 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:09.793 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:09.793 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:09.793 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:09.793 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.793 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.052 nvme0n1 00:17:10.052 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.052 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:10.052 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:10.052 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.052 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.052 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.052 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.052 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:17:10.052 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.052 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.052 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.052 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:10.052 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:17:10.052 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:10.052 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:10.052 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:10.052 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:10.052 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTg4MWU5NzA1OTJlMTUzMDVhNDZiMjRjZjAwYTE2ZGRiMTY2OTgxNTVhNTMyNWU0h6u0CA==: 00:17:10.052 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTdkMjJkMTRjOGNiY2Y1N2JmZWJkYTBiYjcyNjkzNjSmxfQN: 00:17:10.052 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:10.052 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:10.052 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTg4MWU5NzA1OTJlMTUzMDVhNDZiMjRjZjAwYTE2ZGRiMTY2OTgxNTVhNTMyNWU0h6u0CA==: 00:17:10.052 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTdkMjJkMTRjOGNiY2Y1N2JmZWJkYTBiYjcyNjkzNjSmxfQN: ]] 00:17:10.052 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTdkMjJkMTRjOGNiY2Y1N2JmZWJkYTBiYjcyNjkzNjSmxfQN: 00:17:10.052 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:17:10.052 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:10.052 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:10.052 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:10.052 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:10.052 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:10.052 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:10.052 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.052 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.052 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.052 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:10.052 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:10.052 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:10.052 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:10.052 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:10.052 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:10.052 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:10.052 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:10.052 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:10.052 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:10.052 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:10.052 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:10.052 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.052 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.052 nvme0n1 00:17:10.052 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.052 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:10.052 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:10.052 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.052 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.052 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.312 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.312 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:10.312 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.312 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.312 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.312 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:10.312 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:17:10.312 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:10.312 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:10.312 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:10.312 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:10.312 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmUwZDVkNWIwMTAyYmE2ODU5ODQ5MTNmMjU3N2VhMjBjZWJkOTQ5ODgyZWIxZTY0MTI4YzlmNjY3Njg3MjM4MxpwKII=: 00:17:10.312 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:10.312 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:10.312 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:10.312 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZmUwZDVkNWIwMTAyYmE2ODU5ODQ5MTNmMjU3N2VhMjBjZWJkOTQ5ODgyZWIxZTY0MTI4YzlmNjY3Njg3MjM4MxpwKII=: 00:17:10.312 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:10.312 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:17:10.312 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:10.312 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:10.312 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:10.312 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:10.312 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:10.312 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:10.312 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.312 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.312 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.312 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:10.312 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:10.312 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:10.312 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:10.312 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:10.312 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:10.312 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:10.312 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:10.312 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:10.312 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:10.312 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:10.312 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:10.312 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.312 11:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.312 nvme0n1 00:17:10.312 11:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.312 11:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:10.312 11:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.312 11:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.312 11:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:10.312 11:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
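nvmet_auth_set_key (host/auth.sh@42-51) is the target-side half of each iteration: it picks the key/ckey pair for the key ID and echoes the digest, DH group and DHHC-1 secrets somewhere the kernel nvmet target reads them. The redirection targets are not traced; the configfs paths below are therefore an assumption about how an nvmet host entry is usually programmed, not something shown in this log:

    nvmet_auth_set_key() {                         # host/auth.sh@42-51, sketch
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[keyid]} ckey=${ckeys[keyid]}
        # Assumed destination: the nvmet configfs entry for the host NQN used above.
        local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
        echo "hmac($digest)" > "$host/dhchap_hash"     # @48, e.g. hmac(sha256)
        echo "$dhgroup"      > "$host/dhchap_dhgroup"  # @49, e.g. ffdhe3072
        echo "$key"          > "$host/dhchap_key"      # @50, host secret
        # @51: only set a controller key (bidirectional auth) when one is defined;
        # keyid=4 has an empty ckey, hence the [[ -z '' ]] branch in the trace.
        [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"
    }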
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.312 11:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.312 11:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:10.312 11:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.312 11:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.312 11:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.312 11:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:10.312 11:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:10.312 11:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:17:10.312 11:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:10.312 11:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:10.312 11:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:10.312 11:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:10.312 11:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjE4MjBkMDQxODFhN2IxYmZhMjZkODk0NWZmZWUxY2VAPr2H: 00:17:10.312 11:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGY3YTQzYjAyNGVjM2I0NzM2N2RkNjI0M2M5NzZmZDkyODdjMjIwNGNmZjFiZTUzMjljOWE4MzFjNTg0MzIwZvyzlvY=: 00:17:10.312 11:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:10.312 11:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:10.880 11:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjE4MjBkMDQxODFhN2IxYmZhMjZkODk0NWZmZWUxY2VAPr2H: 00:17:10.880 11:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGY3YTQzYjAyNGVjM2I0NzM2N2RkNjI0M2M5NzZmZDkyODdjMjIwNGNmZjFiZTUzMjljOWE4MzFjNTg0MzIwZvyzlvY=: ]] 00:17:10.881 11:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGY3YTQzYjAyNGVjM2I0NzM2N2RkNjI0M2M5NzZmZDkyODdjMjIwNGNmZjFiZTUzMjljOWE4MzFjNTg0MzIwZvyzlvY=: 00:17:10.881 11:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:17:10.881 11:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:10.881 11:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:10.881 11:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:10.881 11:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:10.881 11:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:10.881 11:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:10.881 11:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.881 11:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.881 11:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.881 11:00:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:10.881 11:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:10.881 11:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:10.881 11:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:10.881 11:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:10.881 11:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:10.881 11:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:10.881 11:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:10.881 11:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:10.881 11:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:10.881 11:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:10.881 11:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:10.881 11:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.881 11:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.140 nvme0n1 00:17:11.140 11:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.140 11:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:11.140 11:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.140 11:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.140 11:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:11.140 11:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.140 11:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.140 11:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:11.140 11:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.140 11:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.140 11:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.140 11:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:11.140 11:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:17:11.140 11:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:11.140 11:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:11.140 11:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:11.140 11:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:11.140 11:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NmZlYjRhZmI0NTFiZTFjOWZlNTRjYzFmZDk2YzI1ZjAzMDA0ODQxM2MxMTU3YTA5YwNPfg==: 00:17:11.140 11:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmNiNTFkN2M2MTBhYTI1ZjRlZjkxMDVlMDI4MWY5MDE2MjBmYWE4MTYzYWQyZDVlOVTYSQ==: 00:17:11.140 11:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:11.140 11:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:11.140 11:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmZlYjRhZmI0NTFiZTFjOWZlNTRjYzFmZDk2YzI1ZjAzMDA0ODQxM2MxMTU3YTA5YwNPfg==: 00:17:11.140 11:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmNiNTFkN2M2MTBhYTI1ZjRlZjkxMDVlMDI4MWY5MDE2MjBmYWE4MTYzYWQyZDVlOVTYSQ==: ]] 00:17:11.140 11:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmNiNTFkN2M2MTBhYTI1ZjRlZjkxMDVlMDI4MWY5MDE2MjBmYWE4MTYzYWQyZDVlOVTYSQ==: 00:17:11.140 11:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:17:11.140 11:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:11.140 11:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:11.140 11:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:11.140 11:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:11.140 11:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:11.140 11:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:11.140 11:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.140 11:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.140 11:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.140 11:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:11.141 11:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:11.141 11:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:11.141 11:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:11.141 11:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:11.141 11:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:11.141 11:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:11.141 11:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:11.141 11:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:11.141 11:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:11.141 11:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:11.141 11:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:11.141 11:00:57 
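The ckey=( ... ) assignment at host/auth.sh@58 relies on bash's ${parameter:+word} expansion to build an optional argument pair: it expands to --dhchap-ctrlr-key ckeyN only when ckeys[keyid] is set and non-empty, and to nothing at all otherwise (as for keyid=4, where no controller key exists). A standalone illustration with made-up values:

    secret='DHHC-1:02:example'                      # stands in for a set ckeys[1]
    extra=(${secret:+--dhchap-ctrlr-key "ckey1"})
    echo "${#extra[@]} words: ${extra[*]}"          # -> 2 words: --dhchap-ctrlr-key ckey1

    secret=''                                       # stands in for the empty ckeys[4]
    extra=(${secret:+--dhchap-ctrlr-key "ckey4"})
    echo "${#extra[@]} words: ${extra[*]}"          # -> 0 words:

That is why the keyid=4 attach_controller calls in this log carry only --dhchap-key key4, while the other key IDs also pass --dhchap-ctrlr-key.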
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.141 11:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.400 nvme0n1 00:17:11.400 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.400 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:11.400 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.400 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.400 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:11.400 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.400 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.400 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:11.400 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.400 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.400 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.400 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:11.400 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:17:11.400 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:11.400 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:11.400 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:11.400 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:11.400 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzE2MjBiYWMxMWFiMmIxYTZjZDdiZTkzMTBkZDFhOTbm1UA4: 00:17:11.400 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGI2YzRhYTgxOTU2NzNmMzQxZjM2YzQ3YWUxNjQ3MjJ9brEL: 00:17:11.400 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:11.400 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:11.400 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzE2MjBiYWMxMWFiMmIxYTZjZDdiZTkzMTBkZDFhOTbm1UA4: 00:17:11.400 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGI2YzRhYTgxOTU2NzNmMzQxZjM2YzQ3YWUxNjQ3MjJ9brEL: ]] 00:17:11.400 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGI2YzRhYTgxOTU2NzNmMzQxZjM2YzQ3YWUxNjQ3MjJ9brEL: 00:17:11.400 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:17:11.400 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:11.400 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:11.400 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:11.400 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:11.400 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:11.400 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:11.400 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.400 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.400 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.400 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:11.400 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:11.400 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:11.400 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:11.400 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:11.401 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:11.401 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:11.401 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:11.401 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:11.401 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:11.401 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:11.401 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:11.401 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.401 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.661 nvme0n1 00:17:11.661 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.661 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:11.661 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.661 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.661 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:11.661 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.661 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.661 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:11.661 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.661 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.661 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.661 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:11.661 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 3 00:17:11.661 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:11.661 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:11.661 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:11.661 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:11.661 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTg4MWU5NzA1OTJlMTUzMDVhNDZiMjRjZjAwYTE2ZGRiMTY2OTgxNTVhNTMyNWU0h6u0CA==: 00:17:11.661 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTdkMjJkMTRjOGNiY2Y1N2JmZWJkYTBiYjcyNjkzNjSmxfQN: 00:17:11.661 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:11.661 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:11.661 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTg4MWU5NzA1OTJlMTUzMDVhNDZiMjRjZjAwYTE2ZGRiMTY2OTgxNTVhNTMyNWU0h6u0CA==: 00:17:11.661 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTdkMjJkMTRjOGNiY2Y1N2JmZWJkYTBiYjcyNjkzNjSmxfQN: ]] 00:17:11.661 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTdkMjJkMTRjOGNiY2Y1N2JmZWJkYTBiYjcyNjkzNjSmxfQN: 00:17:11.661 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:17:11.661 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:11.661 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:11.661 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:11.661 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:11.661 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:11.661 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:11.661 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.661 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.661 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.661 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:11.661 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:11.661 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:11.661 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:11.661 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:11.661 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:11.661 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:11.661 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:11.661 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:11.661 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:11.661 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:11.661 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:11.661 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.661 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.921 nvme0n1 00:17:11.921 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.921 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:11.921 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:11.921 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.921 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.921 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.921 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.921 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:11.921 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.921 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.921 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.921 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:11.921 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:17:11.921 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:11.921 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:11.921 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:11.921 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:11.921 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmUwZDVkNWIwMTAyYmE2ODU5ODQ5MTNmMjU3N2VhMjBjZWJkOTQ5ODgyZWIxZTY0MTI4YzlmNjY3Njg3MjM4MxpwKII=: 00:17:11.921 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:11.921 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:11.921 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:11.921 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmUwZDVkNWIwMTAyYmE2ODU5ODQ5MTNmMjU3N2VhMjBjZWJkOTQ5ODgyZWIxZTY0MTI4YzlmNjY3Njg3MjM4MxpwKII=: 00:17:11.921 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:11.921 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:17:11.921 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:11.921 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:11.921 11:00:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:11.921 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:11.921 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:11.921 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:11.921 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.921 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.921 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.921 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:11.921 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:11.921 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:11.921 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:11.921 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:11.921 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:11.921 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:11.921 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:11.921 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:11.921 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:11.921 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:11.921 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:11.921 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.921 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.181 nvme0n1 00:17:12.181 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.181 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:12.181 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:12.181 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.181 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.181 11:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.181 11:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.181 11:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:12.181 11:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.181 11:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.181 11:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
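The DHHC-1 strings passed around in this log follow the NVMe DH-HMAC-CHAP secret representation: a 'DHHC-1:' prefix, two hex digits tying the secret to a hash (00 = none, 01/02/03 = SHA-256/384/512 sized), then base64 of the raw secret followed by a 4-byte CRC-32, and a closing ':'. A quick way to check that structure against one of the keys above (the CRC interpretation of the trailing four bytes comes from that format description and is not verified here):

    key='DHHC-1:00:ZjE4MjBkMDQxODFhN2IxYmZhMjZkODk0NWZmZWUxY2VAPr2H:'   # keyid=0 above
    IFS=: read -r _ hash b64 _ <<< "$key"
    len=$(printf '%s' "$b64" | base64 -d | wc -c)
    echo "hash id $hash, $len decoded bytes ($((len - 4)) secret + 4 checksum)"
    # -> hash id 00, 36 decoded bytes (32 secret + 4 checksum)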
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.181 11:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:12.181 11:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:12.181 11:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:17:12.181 11:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:12.181 11:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:12.181 11:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:12.181 11:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:12.181 11:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjE4MjBkMDQxODFhN2IxYmZhMjZkODk0NWZmZWUxY2VAPr2H: 00:17:12.181 11:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGY3YTQzYjAyNGVjM2I0NzM2N2RkNjI0M2M5NzZmZDkyODdjMjIwNGNmZjFiZTUzMjljOWE4MzFjNTg0MzIwZvyzlvY=: 00:17:12.181 11:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:12.181 11:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:14.082 11:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjE4MjBkMDQxODFhN2IxYmZhMjZkODk0NWZmZWUxY2VAPr2H: 00:17:14.082 11:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGY3YTQzYjAyNGVjM2I0NzM2N2RkNjI0M2M5NzZmZDkyODdjMjIwNGNmZjFiZTUzMjljOWE4MzFjNTg0MzIwZvyzlvY=: ]] 00:17:14.083 11:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGY3YTQzYjAyNGVjM2I0NzM2N2RkNjI0M2M5NzZmZDkyODdjMjIwNGNmZjFiZTUzMjljOWE4MzFjNTg0MzIwZvyzlvY=: 00:17:14.083 11:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:17:14.083 11:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:14.083 11:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:14.083 11:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:14.083 11:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:14.083 11:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:14.083 11:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:14.083 11:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.083 11:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.083 11:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.083 11:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:14.083 11:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:14.083 11:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:14.083 11:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:14.083 11:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:14.083 11:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:14.083 11:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:14.083 11:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:14.083 11:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:14.083 11:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:14.083 11:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:14.083 11:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:14.083 11:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.083 11:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.083 nvme0n1 00:17:14.083 11:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.083 11:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:14.083 11:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.083 11:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:14.083 11:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.083 11:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.342 11:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.342 11:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:14.342 11:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.342 11:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.342 11:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.342 11:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:14.342 11:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:17:14.342 11:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:14.342 11:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:14.342 11:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:14.342 11:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:14.342 11:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmZlYjRhZmI0NTFiZTFjOWZlNTRjYzFmZDk2YzI1ZjAzMDA0ODQxM2MxMTU3YTA5YwNPfg==: 00:17:14.342 11:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmNiNTFkN2M2MTBhYTI1ZjRlZjkxMDVlMDI4MWY5MDE2MjBmYWE4MTYzYWQyZDVlOVTYSQ==: 00:17:14.342 11:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:14.342 11:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:14.342 11:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NmZlYjRhZmI0NTFiZTFjOWZlNTRjYzFmZDk2YzI1ZjAzMDA0ODQxM2MxMTU3YTA5YwNPfg==: 00:17:14.342 11:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmNiNTFkN2M2MTBhYTI1ZjRlZjkxMDVlMDI4MWY5MDE2MjBmYWE4MTYzYWQyZDVlOVTYSQ==: ]] 00:17:14.342 11:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmNiNTFkN2M2MTBhYTI1ZjRlZjkxMDVlMDI4MWY5MDE2MjBmYWE4MTYzYWQyZDVlOVTYSQ==: 00:17:14.342 11:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:17:14.342 11:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:14.342 11:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:14.342 11:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:14.342 11:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:14.342 11:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:14.342 11:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:14.342 11:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.342 11:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.342 11:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.342 11:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:14.342 11:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:14.342 11:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:14.342 11:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:14.342 11:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:14.342 11:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:14.342 11:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:14.343 11:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:14.343 11:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:14.343 11:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:14.343 11:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:14.343 11:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:14.343 11:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.343 11:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.602 nvme0n1 00:17:14.602 11:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.602 11:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:14.602 11:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:14.602 11:01:01 
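Every iteration traced above runs the same host-side sequence from connect_authenticate: restrict the DH-HMAC-CHAP parameters, attach with the matching keyring entries, check that the controller actually appeared, then detach. A minimal sketch of that sequence for the sha256/ffdhe6144/keyid=1 case just logged, assuming rpc_cmd is the suite's usual wrapper around scripts/rpc.py and that key1/ckey1 were registered with the keyring earlier in the run:

    # Allow only the digest and DH group under test for this iteration.
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144

    # Attach to the kernel target on 10.0.0.1:4420; the attach completes only if
    # DH-HMAC-CHAP authentication with key1 (and controller key ckey1) succeeds.
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # Verify the controller exists, then tear it down for the next combination.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
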
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.602 11:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.602 11:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.602 11:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.602 11:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:14.602 11:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.602 11:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.602 11:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.602 11:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:14.602 11:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:17:14.602 11:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:14.602 11:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:14.602 11:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:14.602 11:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:14.602 11:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzE2MjBiYWMxMWFiMmIxYTZjZDdiZTkzMTBkZDFhOTbm1UA4: 00:17:14.602 11:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGI2YzRhYTgxOTU2NzNmMzQxZjM2YzQ3YWUxNjQ3MjJ9brEL: 00:17:14.602 11:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:14.602 11:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:14.602 11:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzE2MjBiYWMxMWFiMmIxYTZjZDdiZTkzMTBkZDFhOTbm1UA4: 00:17:14.602 11:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGI2YzRhYTgxOTU2NzNmMzQxZjM2YzQ3YWUxNjQ3MjJ9brEL: ]] 00:17:14.602 11:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGI2YzRhYTgxOTU2NzNmMzQxZjM2YzQ3YWUxNjQ3MjJ9brEL: 00:17:14.602 11:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:17:14.602 11:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:14.602 11:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:14.602 11:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:14.602 11:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:14.602 11:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:14.602 11:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:14.602 11:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.602 11:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.602 11:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.602 11:01:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:14.602 11:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:14.602 11:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:14.602 11:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:14.602 11:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:14.602 11:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:14.602 11:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:14.602 11:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:14.602 11:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:14.602 11:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:14.602 11:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:14.602 11:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:14.602 11:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.602 11:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.861 nvme0n1 00:17:14.861 11:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.861 11:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:14.861 11:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:14.861 11:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.861 11:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.120 11:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.120 11:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.120 11:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:15.120 11:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.120 11:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.120 11:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.120 11:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:15.120 11:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:17:15.120 11:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:15.120 11:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:15.120 11:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:15.120 11:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:15.120 11:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NTg4MWU5NzA1OTJlMTUzMDVhNDZiMjRjZjAwYTE2ZGRiMTY2OTgxNTVhNTMyNWU0h6u0CA==: 00:17:15.120 11:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTdkMjJkMTRjOGNiY2Y1N2JmZWJkYTBiYjcyNjkzNjSmxfQN: 00:17:15.120 11:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:15.120 11:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:15.120 11:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTg4MWU5NzA1OTJlMTUzMDVhNDZiMjRjZjAwYTE2ZGRiMTY2OTgxNTVhNTMyNWU0h6u0CA==: 00:17:15.120 11:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTdkMjJkMTRjOGNiY2Y1N2JmZWJkYTBiYjcyNjkzNjSmxfQN: ]] 00:17:15.120 11:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTdkMjJkMTRjOGNiY2Y1N2JmZWJkYTBiYjcyNjkzNjSmxfQN: 00:17:15.120 11:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:17:15.120 11:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:15.120 11:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:15.120 11:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:15.120 11:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:15.120 11:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:15.120 11:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:15.120 11:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.120 11:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.120 11:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.120 11:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:15.120 11:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:15.120 11:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:15.120 11:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:15.120 11:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:15.120 11:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:15.120 11:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:15.120 11:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:15.120 11:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:15.120 11:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:15.120 11:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:15.120 11:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:15.120 11:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.120 
11:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.379 nvme0n1 00:17:15.379 11:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.379 11:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:15.379 11:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.379 11:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:15.379 11:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.379 11:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.379 11:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.379 11:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:15.379 11:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.379 11:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.379 11:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.379 11:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:15.379 11:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:17:15.379 11:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:15.379 11:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:15.379 11:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:15.379 11:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:15.379 11:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmUwZDVkNWIwMTAyYmE2ODU5ODQ5MTNmMjU3N2VhMjBjZWJkOTQ5ODgyZWIxZTY0MTI4YzlmNjY3Njg3MjM4MxpwKII=: 00:17:15.379 11:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:15.379 11:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:15.379 11:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:15.379 11:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmUwZDVkNWIwMTAyYmE2ODU5ODQ5MTNmMjU3N2VhMjBjZWJkOTQ5ODgyZWIxZTY0MTI4YzlmNjY3Njg3MjM4MxpwKII=: 00:17:15.379 11:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:15.379 11:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:17:15.379 11:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:15.379 11:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:15.379 11:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:15.379 11:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:15.379 11:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:15.379 11:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:15.379 11:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.379 11:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.379 11:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.379 11:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:15.379 11:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:15.379 11:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:15.379 11:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:15.379 11:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:15.379 11:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:15.379 11:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:15.379 11:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:15.379 11:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:15.379 11:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:15.379 11:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:15.379 11:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:15.379 11:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.379 11:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.947 nvme0n1 00:17:15.947 11:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.947 11:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:15.947 11:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:15.947 11:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.947 11:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.947 11:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.947 11:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.947 11:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:15.947 11:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.947 11:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.947 11:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.947 11:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:15.947 11:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:15.947 11:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:17:15.947 11:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:15.947 11:01:02 
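The target half of each iteration is nvmet_auth_set_key, which produces the echo 'hmac(sha256)', echo <dhgroup> and echo DHHC-1:... lines in the trace. The echoed values are taken from the log; where they end up is not visible here, so the configfs paths below are an assumption (the standard Linux kernel nvmet per-host DH-HMAC-CHAP attributes), not a quote from auth.sh:

    # Assumed reconstruction -- only the echoed values come from the trace above;
    # the /sys/kernel/config/nvmet paths are the usual kernel-target attributes.
    hostnqn=nqn.2024-02.io.spdk:host0
    host_dir=/sys/kernel/config/nvmet/hosts/$hostnqn

    echo 'hmac(sha256)' > "$host_dir/dhchap_hash"     # digest for this iteration
    echo ffdhe8192      > "$host_dir/dhchap_dhgroup"  # DH group for this iteration
    echo "DHHC-1:00:ZjE4MjBkMDQxODFhN2IxYmZhMjZkODk0NWZmZWUxY2VAPr2H:" \
        > "$host_dir/dhchap_key"                      # host secret for keyid 0
    # When a controller key exists for the keyid, it is written the same way:
    # echo "DHHC-1:03:ZGY3YTQz...MzIwZvyzlvY=:" > "$host_dir/dhchap_ctrl_key"
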
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:15.947 11:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:15.947 11:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:15.947 11:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjE4MjBkMDQxODFhN2IxYmZhMjZkODk0NWZmZWUxY2VAPr2H: 00:17:15.947 11:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGY3YTQzYjAyNGVjM2I0NzM2N2RkNjI0M2M5NzZmZDkyODdjMjIwNGNmZjFiZTUzMjljOWE4MzFjNTg0MzIwZvyzlvY=: 00:17:15.947 11:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:15.947 11:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:15.947 11:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjE4MjBkMDQxODFhN2IxYmZhMjZkODk0NWZmZWUxY2VAPr2H: 00:17:15.947 11:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGY3YTQzYjAyNGVjM2I0NzM2N2RkNjI0M2M5NzZmZDkyODdjMjIwNGNmZjFiZTUzMjljOWE4MzFjNTg0MzIwZvyzlvY=: ]] 00:17:15.947 11:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGY3YTQzYjAyNGVjM2I0NzM2N2RkNjI0M2M5NzZmZDkyODdjMjIwNGNmZjFiZTUzMjljOWE4MzFjNTg0MzIwZvyzlvY=: 00:17:15.947 11:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:17:15.947 11:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:15.947 11:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:15.947 11:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:15.947 11:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:15.947 11:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:15.947 11:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:15.947 11:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.947 11:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.947 11:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.947 11:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:15.947 11:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:15.947 11:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:15.947 11:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:15.947 11:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:15.947 11:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:15.947 11:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:15.947 11:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:15.947 11:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:15.947 11:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:15.947 11:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:15.947 11:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:15.947 11:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.947 11:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.526 nvme0n1 00:17:16.526 11:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.526 11:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:16.526 11:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:16.526 11:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.526 11:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.526 11:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.526 11:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.526 11:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:16.526 11:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.526 11:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.526 11:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.526 11:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:16.526 11:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:17:16.527 11:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:16.527 11:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:16.527 11:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:16.527 11:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:16.527 11:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmZlYjRhZmI0NTFiZTFjOWZlNTRjYzFmZDk2YzI1ZjAzMDA0ODQxM2MxMTU3YTA5YwNPfg==: 00:17:16.527 11:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmNiNTFkN2M2MTBhYTI1ZjRlZjkxMDVlMDI4MWY5MDE2MjBmYWE4MTYzYWQyZDVlOVTYSQ==: 00:17:16.527 11:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:16.527 11:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:16.527 11:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmZlYjRhZmI0NTFiZTFjOWZlNTRjYzFmZDk2YzI1ZjAzMDA0ODQxM2MxMTU3YTA5YwNPfg==: 00:17:16.527 11:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmNiNTFkN2M2MTBhYTI1ZjRlZjkxMDVlMDI4MWY5MDE2MjBmYWE4MTYzYWQyZDVlOVTYSQ==: ]] 00:17:16.527 11:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmNiNTFkN2M2MTBhYTI1ZjRlZjkxMDVlMDI4MWY5MDE2MjBmYWE4MTYzYWQyZDVlOVTYSQ==: 00:17:16.527 11:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:17:16.527 11:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:16.527 11:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:16.527 11:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:16.527 11:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:16.527 11:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:16.527 11:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:16.527 11:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.527 11:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.527 11:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.527 11:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:16.527 11:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:16.527 11:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:16.527 11:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:16.527 11:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:16.527 11:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:16.527 11:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:16.527 11:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:16.527 11:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:16.527 11:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:16.527 11:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:16.527 11:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:16.527 11:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.527 11:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.094 nvme0n1 00:17:17.094 11:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.094 11:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:17.094 11:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.094 11:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:17.094 11:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.094 11:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.094 11:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.094 11:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:17.095 11:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:17.095 11:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.095 11:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.095 11:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:17.095 11:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:17:17.095 11:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:17.095 11:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:17.095 11:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:17.095 11:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:17.095 11:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzE2MjBiYWMxMWFiMmIxYTZjZDdiZTkzMTBkZDFhOTbm1UA4: 00:17:17.095 11:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGI2YzRhYTgxOTU2NzNmMzQxZjM2YzQ3YWUxNjQ3MjJ9brEL: 00:17:17.095 11:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:17.095 11:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:17.095 11:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzE2MjBiYWMxMWFiMmIxYTZjZDdiZTkzMTBkZDFhOTbm1UA4: 00:17:17.095 11:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGI2YzRhYTgxOTU2NzNmMzQxZjM2YzQ3YWUxNjQ3MjJ9brEL: ]] 00:17:17.095 11:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGI2YzRhYTgxOTU2NzNmMzQxZjM2YzQ3YWUxNjQ3MjJ9brEL: 00:17:17.095 11:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:17:17.095 11:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:17.095 11:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:17.095 11:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:17.095 11:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:17.095 11:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:17.095 11:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:17.095 11:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.095 11:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.095 11:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.095 11:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:17.095 11:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:17.095 11:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:17.095 11:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:17.095 11:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:17.095 11:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:17.095 
11:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:17.095 11:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:17.095 11:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:17.095 11:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:17.095 11:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:17.095 11:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:17.095 11:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.095 11:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.661 nvme0n1 00:17:17.661 11:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.661 11:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:17.661 11:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:17.661 11:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.661 11:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.661 11:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.661 11:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.661 11:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:17.661 11:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.661 11:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.661 11:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.661 11:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:17.661 11:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:17:17.662 11:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:17.662 11:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:17.662 11:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:17.662 11:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:17.662 11:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTg4MWU5NzA1OTJlMTUzMDVhNDZiMjRjZjAwYTE2ZGRiMTY2OTgxNTVhNTMyNWU0h6u0CA==: 00:17:17.662 11:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTdkMjJkMTRjOGNiY2Y1N2JmZWJkYTBiYjcyNjkzNjSmxfQN: 00:17:17.662 11:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:17.662 11:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:17.662 11:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTg4MWU5NzA1OTJlMTUzMDVhNDZiMjRjZjAwYTE2ZGRiMTY2OTgxNTVhNTMyNWU0h6u0CA==: 00:17:17.662 11:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:ZTdkMjJkMTRjOGNiY2Y1N2JmZWJkYTBiYjcyNjkzNjSmxfQN: ]] 00:17:17.662 11:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTdkMjJkMTRjOGNiY2Y1N2JmZWJkYTBiYjcyNjkzNjSmxfQN: 00:17:17.662 11:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:17:17.662 11:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:17.662 11:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:17.662 11:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:17.662 11:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:17.662 11:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:17.662 11:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:17.662 11:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.662 11:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.662 11:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.662 11:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:17.662 11:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:17.662 11:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:17.662 11:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:17.662 11:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:17.662 11:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:17.662 11:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:17.662 11:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:17.662 11:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:17.662 11:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:17.662 11:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:17.662 11:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:17.662 11:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.662 11:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.229 nvme0n1 00:17:18.229 11:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.229 11:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:18.229 11:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:18.229 11:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.229 11:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.229 11:01:04 
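The DHHC-1:... strings themselves are DH-HMAC-CHAP secrets in the NVMe textual representation: the literal prefix DHHC-1:, a two-digit field describing how the secret is to be used (00 meaning the decoded bytes are taken as-is), a base64 payload that should decode to the secret followed by a 4-byte CRC-32, and a closing colon. A purely illustrative check of key0 from this run (not something auth.sh does):

    # Illustrative only -- decode key0 from the trace and confirm the payload is
    # a 32-byte secret plus the 4-byte CRC-32 trailer the representation carries.
    key='DHHC-1:00:ZjE4MjBkMDQxODFhN2IxYmZhMjZkODk0NWZmZWUxY2VAPr2H:'
    b64=$(cut -d: -f3 <<< "$key")
    len=$(base64 -d <<< "$b64" | wc -c)
    echo "decoded payload: $len bytes"   # expect 36 = 32 (secret) + 4 (CRC-32)
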
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.229 11:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.229 11:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:18.229 11:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.229 11:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.229 11:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.229 11:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:18.229 11:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:17:18.229 11:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:18.229 11:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:18.229 11:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:18.229 11:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:18.229 11:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmUwZDVkNWIwMTAyYmE2ODU5ODQ5MTNmMjU3N2VhMjBjZWJkOTQ5ODgyZWIxZTY0MTI4YzlmNjY3Njg3MjM4MxpwKII=: 00:17:18.229 11:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:18.229 11:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:18.229 11:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:18.229 11:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmUwZDVkNWIwMTAyYmE2ODU5ODQ5MTNmMjU3N2VhMjBjZWJkOTQ5ODgyZWIxZTY0MTI4YzlmNjY3Njg3MjM4MxpwKII=: 00:17:18.229 11:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:18.229 11:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:17:18.229 11:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:18.229 11:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:18.229 11:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:18.229 11:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:18.229 11:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:18.229 11:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:18.229 11:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.229 11:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.229 11:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.229 11:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:18.229 11:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:18.229 11:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:18.229 11:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:18.229 11:01:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:18.229 11:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:18.229 11:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:18.229 11:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:18.229 11:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:18.229 11:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:18.229 11:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:18.230 11:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:18.230 11:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.230 11:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.798 nvme0n1 00:17:18.798 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.798 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:18.798 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.798 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:18.798 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.798 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.798 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.798 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:18.798 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.798 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.798 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.798 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:17:18.798 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:18.798 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:18.798 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:17:18.798 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:18.798 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:18.798 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:18.798 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:18.798 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjE4MjBkMDQxODFhN2IxYmZhMjZkODk0NWZmZWUxY2VAPr2H: 00:17:18.798 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZGY3YTQzYjAyNGVjM2I0NzM2N2RkNjI0M2M5NzZmZDkyODdjMjIwNGNmZjFiZTUzMjljOWE4MzFjNTg0MzIwZvyzlvY=: 00:17:18.798 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:18.798 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:18.798 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjE4MjBkMDQxODFhN2IxYmZhMjZkODk0NWZmZWUxY2VAPr2H: 00:17:18.798 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGY3YTQzYjAyNGVjM2I0NzM2N2RkNjI0M2M5NzZmZDkyODdjMjIwNGNmZjFiZTUzMjljOWE4MzFjNTg0MzIwZvyzlvY=: ]] 00:17:18.798 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGY3YTQzYjAyNGVjM2I0NzM2N2RkNjI0M2M5NzZmZDkyODdjMjIwNGNmZjFiZTUzMjljOWE4MzFjNTg0MzIwZvyzlvY=: 00:17:18.798 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:17:18.798 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:18.798 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:18.799 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:18.799 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:18.799 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:18.799 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:18.799 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.799 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.799 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.799 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:18.799 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:18.799 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:18.799 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:18.799 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:18.799 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:18.799 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:18.799 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:18.799 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:18.799 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:18.799 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:18.799 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:18.799 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.799 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:17:19.058 nvme0n1 00:17:19.058 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.058 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:19.058 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:19.058 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.058 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.058 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.058 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.058 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:19.058 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.058 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.058 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.058 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:19.058 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:17:19.058 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:19.058 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:19.058 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:19.058 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:19.058 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmZlYjRhZmI0NTFiZTFjOWZlNTRjYzFmZDk2YzI1ZjAzMDA0ODQxM2MxMTU3YTA5YwNPfg==: 00:17:19.058 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmNiNTFkN2M2MTBhYTI1ZjRlZjkxMDVlMDI4MWY5MDE2MjBmYWE4MTYzYWQyZDVlOVTYSQ==: 00:17:19.058 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:19.058 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:19.058 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmZlYjRhZmI0NTFiZTFjOWZlNTRjYzFmZDk2YzI1ZjAzMDA0ODQxM2MxMTU3YTA5YwNPfg==: 00:17:19.058 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmNiNTFkN2M2MTBhYTI1ZjRlZjkxMDVlMDI4MWY5MDE2MjBmYWE4MTYzYWQyZDVlOVTYSQ==: ]] 00:17:19.058 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmNiNTFkN2M2MTBhYTI1ZjRlZjkxMDVlMDI4MWY5MDE2MjBmYWE4MTYzYWQyZDVlOVTYSQ==: 00:17:19.058 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:17:19.058 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:19.058 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:19.058 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:19.058 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:19.058 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:17:19.058 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:19.058 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.058 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.058 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.058 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:19.058 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:19.058 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:19.058 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:19.058 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:19.058 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:19.058 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:19.058 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:19.058 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:19.058 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:19.058 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:19.059 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:19.059 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.059 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.059 nvme0n1 00:17:19.059 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.059 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:19.059 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:19.059 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.059 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.059 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.059 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.059 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:19.059 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.059 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.318 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.318 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:19.318 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:17:19.318 
11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:19.318 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:19.318 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:19.318 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:19.318 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzE2MjBiYWMxMWFiMmIxYTZjZDdiZTkzMTBkZDFhOTbm1UA4: 00:17:19.318 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGI2YzRhYTgxOTU2NzNmMzQxZjM2YzQ3YWUxNjQ3MjJ9brEL: 00:17:19.318 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:19.318 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:19.318 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzE2MjBiYWMxMWFiMmIxYTZjZDdiZTkzMTBkZDFhOTbm1UA4: 00:17:19.318 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGI2YzRhYTgxOTU2NzNmMzQxZjM2YzQ3YWUxNjQ3MjJ9brEL: ]] 00:17:19.318 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGI2YzRhYTgxOTU2NzNmMzQxZjM2YzQ3YWUxNjQ3MjJ9brEL: 00:17:19.318 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:17:19.318 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:19.318 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:19.318 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:19.318 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:19.318 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:19.318 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:19.318 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.318 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.318 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.318 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:19.318 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:19.318 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:19.318 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:19.318 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:19.318 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:19.318 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:19.318 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:19.318 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:19.318 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:19.318 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:19.318 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:19.318 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.318 11:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.318 nvme0n1 00:17:19.318 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.318 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:19.318 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.318 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:19.318 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.318 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.318 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.318 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:19.319 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.319 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.319 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.319 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:19.319 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:17:19.319 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:19.319 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:19.319 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:19.319 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:19.319 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTg4MWU5NzA1OTJlMTUzMDVhNDZiMjRjZjAwYTE2ZGRiMTY2OTgxNTVhNTMyNWU0h6u0CA==: 00:17:19.319 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTdkMjJkMTRjOGNiY2Y1N2JmZWJkYTBiYjcyNjkzNjSmxfQN: 00:17:19.319 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:19.319 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:19.319 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTg4MWU5NzA1OTJlMTUzMDVhNDZiMjRjZjAwYTE2ZGRiMTY2OTgxNTVhNTMyNWU0h6u0CA==: 00:17:19.319 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTdkMjJkMTRjOGNiY2Y1N2JmZWJkYTBiYjcyNjkzNjSmxfQN: ]] 00:17:19.319 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTdkMjJkMTRjOGNiY2Y1N2JmZWJkYTBiYjcyNjkzNjSmxfQN: 00:17:19.319 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:17:19.319 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:19.319 
11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:19.319 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:19.319 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:19.319 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:19.319 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:19.319 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.319 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.319 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.319 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:19.319 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:19.319 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:19.319 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:19.319 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:19.319 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:19.319 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:19.319 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:19.319 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:19.319 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:19.319 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:19.319 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:19.319 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.319 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.578 nvme0n1 00:17:19.578 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.578 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:19.578 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.578 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.578 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:19.578 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.578 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.578 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:19.578 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.578 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:17:19.578 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.578 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:19.578 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:17:19.578 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:19.578 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:19.578 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:19.578 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:19.578 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmUwZDVkNWIwMTAyYmE2ODU5ODQ5MTNmMjU3N2VhMjBjZWJkOTQ5ODgyZWIxZTY0MTI4YzlmNjY3Njg3MjM4MxpwKII=: 00:17:19.578 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:19.578 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:19.578 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:19.578 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmUwZDVkNWIwMTAyYmE2ODU5ODQ5MTNmMjU3N2VhMjBjZWJkOTQ5ODgyZWIxZTY0MTI4YzlmNjY3Njg3MjM4MxpwKII=: 00:17:19.578 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:19.578 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:17:19.578 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:19.578 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:19.578 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:19.578 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:19.578 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:19.578 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:19.578 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.578 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.578 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.578 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:19.578 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:19.578 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:19.578 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:19.578 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:19.578 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:19.578 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:19.578 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:19.578 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:19.578 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:19.578 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:19.578 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:19.578 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.578 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.578 nvme0n1 00:17:19.578 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.578 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:19.578 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:19.578 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.578 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.578 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.578 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.578 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:19.578 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.578 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.838 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.838 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:19.838 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:19.838 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:17:19.838 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:19.838 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:19.838 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:19.838 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:19.838 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjE4MjBkMDQxODFhN2IxYmZhMjZkODk0NWZmZWUxY2VAPr2H: 00:17:19.838 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGY3YTQzYjAyNGVjM2I0NzM2N2RkNjI0M2M5NzZmZDkyODdjMjIwNGNmZjFiZTUzMjljOWE4MzFjNTg0MzIwZvyzlvY=: 00:17:19.838 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:19.838 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:19.838 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjE4MjBkMDQxODFhN2IxYmZhMjZkODk0NWZmZWUxY2VAPr2H: 00:17:19.838 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGY3YTQzYjAyNGVjM2I0NzM2N2RkNjI0M2M5NzZmZDkyODdjMjIwNGNmZjFiZTUzMjljOWE4MzFjNTg0MzIwZvyzlvY=: ]] 00:17:19.838 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZGY3YTQzYjAyNGVjM2I0NzM2N2RkNjI0M2M5NzZmZDkyODdjMjIwNGNmZjFiZTUzMjljOWE4MzFjNTg0MzIwZvyzlvY=: 00:17:19.838 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:17:19.838 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:19.838 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:19.838 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:19.838 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:19.838 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:19.838 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:19.838 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.838 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.838 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.838 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:19.838 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:19.838 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:19.838 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:19.838 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:19.838 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:19.838 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:19.838 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:19.838 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:19.838 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:19.838 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:19.838 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:19.838 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.838 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.838 nvme0n1 00:17:19.838 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.838 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:19.838 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:19.838 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.838 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.838 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.838 
11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.838 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:19.838 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.838 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.838 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.838 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:19.838 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:17:19.838 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:19.838 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:19.838 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:19.838 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:19.838 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmZlYjRhZmI0NTFiZTFjOWZlNTRjYzFmZDk2YzI1ZjAzMDA0ODQxM2MxMTU3YTA5YwNPfg==: 00:17:19.839 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmNiNTFkN2M2MTBhYTI1ZjRlZjkxMDVlMDI4MWY5MDE2MjBmYWE4MTYzYWQyZDVlOVTYSQ==: 00:17:19.839 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:19.839 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:19.839 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmZlYjRhZmI0NTFiZTFjOWZlNTRjYzFmZDk2YzI1ZjAzMDA0ODQxM2MxMTU3YTA5YwNPfg==: 00:17:19.839 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmNiNTFkN2M2MTBhYTI1ZjRlZjkxMDVlMDI4MWY5MDE2MjBmYWE4MTYzYWQyZDVlOVTYSQ==: ]] 00:17:19.839 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmNiNTFkN2M2MTBhYTI1ZjRlZjkxMDVlMDI4MWY5MDE2MjBmYWE4MTYzYWQyZDVlOVTYSQ==: 00:17:19.839 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:17:19.839 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:19.839 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:19.839 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:19.839 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:19.839 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:19.839 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:19.839 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.839 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.839 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.839 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:19.839 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:19.839 11:01:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:19.839 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:19.839 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:19.839 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:19.839 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:19.839 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:19.839 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:19.839 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:19.839 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:19.839 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:19.839 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.839 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.099 nvme0n1 00:17:20.099 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.099 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:20.099 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.099 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:20.099 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.099 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.099 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.099 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:20.099 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.099 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.099 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.099 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:20.099 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:17:20.099 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:20.099 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:20.099 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:20.099 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:20.099 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzE2MjBiYWMxMWFiMmIxYTZjZDdiZTkzMTBkZDFhOTbm1UA4: 00:17:20.099 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGI2YzRhYTgxOTU2NzNmMzQxZjM2YzQ3YWUxNjQ3MjJ9brEL: 00:17:20.099 11:01:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:20.099 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:20.099 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzE2MjBiYWMxMWFiMmIxYTZjZDdiZTkzMTBkZDFhOTbm1UA4: 00:17:20.099 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGI2YzRhYTgxOTU2NzNmMzQxZjM2YzQ3YWUxNjQ3MjJ9brEL: ]] 00:17:20.099 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGI2YzRhYTgxOTU2NzNmMzQxZjM2YzQ3YWUxNjQ3MjJ9brEL: 00:17:20.099 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:17:20.099 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:20.099 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:20.099 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:20.099 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:20.099 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:20.099 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:20.099 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.099 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.099 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.099 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:20.099 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:20.099 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:20.099 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:20.099 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:20.099 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:20.099 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:20.099 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:20.099 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:20.099 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:20.099 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:20.099 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:20.099 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.099 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.359 nvme0n1 00:17:20.359 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.359 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:20.359 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:20.359 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.359 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.359 11:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.359 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.359 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:20.359 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.359 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.359 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.359 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:20.359 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:17:20.359 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:20.359 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:20.359 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:20.359 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:20.359 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTg4MWU5NzA1OTJlMTUzMDVhNDZiMjRjZjAwYTE2ZGRiMTY2OTgxNTVhNTMyNWU0h6u0CA==: 00:17:20.359 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTdkMjJkMTRjOGNiY2Y1N2JmZWJkYTBiYjcyNjkzNjSmxfQN: 00:17:20.359 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:20.359 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:20.359 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTg4MWU5NzA1OTJlMTUzMDVhNDZiMjRjZjAwYTE2ZGRiMTY2OTgxNTVhNTMyNWU0h6u0CA==: 00:17:20.359 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTdkMjJkMTRjOGNiY2Y1N2JmZWJkYTBiYjcyNjkzNjSmxfQN: ]] 00:17:20.359 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTdkMjJkMTRjOGNiY2Y1N2JmZWJkYTBiYjcyNjkzNjSmxfQN: 00:17:20.359 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:17:20.359 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:20.359 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:20.359 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:20.359 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:20.359 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:20.359 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:20.359 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.359 11:01:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.359 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.359 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:20.359 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:20.359 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:20.359 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:20.359 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:20.359 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:20.359 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:20.359 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:20.359 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:20.359 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:20.359 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:20.359 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:20.359 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.359 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.359 nvme0n1 00:17:20.359 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.359 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:20.359 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.359 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.359 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:20.360 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.360 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.360 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:20.360 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.360 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.619 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.619 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:20.619 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:17:20.619 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:20.619 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:20.619 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:20.619 
11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:20.619 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmUwZDVkNWIwMTAyYmE2ODU5ODQ5MTNmMjU3N2VhMjBjZWJkOTQ5ODgyZWIxZTY0MTI4YzlmNjY3Njg3MjM4MxpwKII=: 00:17:20.619 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:20.619 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:20.619 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:20.619 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmUwZDVkNWIwMTAyYmE2ODU5ODQ5MTNmMjU3N2VhMjBjZWJkOTQ5ODgyZWIxZTY0MTI4YzlmNjY3Njg3MjM4MxpwKII=: 00:17:20.619 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:20.619 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:17:20.619 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:20.619 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:20.619 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:20.619 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:20.619 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:20.619 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:20.619 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.619 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.619 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.619 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:20.619 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:20.619 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:20.619 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:20.619 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:20.619 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:20.619 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:20.619 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:20.619 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:20.619 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:20.619 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:20.619 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:20.619 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.619 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
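[editorial note] The trace above, and the ffdhe4096 block that follows, repeat the same connect_authenticate step for every DH group and key index: restrict the host to one DH-HMAC-CHAP digest/dhgroup pair with bdev_nvme_set_options, attach the controller over TCP at 10.0.0.1:4420 passing the per-index host secret (keyN) and, when present, the bidirectional controller secret (ckeyN), confirm that nvme0 shows up in bdev_nvme_get_controllers, then detach before the next iteration. A minimal sketch of one such iteration, assuming rpc.py is invoked from scripts/rpc.py and that the key0/ckey0 secrets were already registered in the keyring earlier in the run (not shown in this excerpt):

    # one connect_authenticate iteration (sha384 digest, ffdhe2048 group, key index 0)
    rpc=scripts/rpc.py
    $rpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
    $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # verify the authenticated controller came up, then tear it down for the next key
    [[ "$($rpc bdev_nvme_get_controllers | jq -r '.[].name')" == "nvme0" ]]
    $rpc bdev_nvme_detach_controller nvme0

The keyid=4 case differs only in that no controller key is configured (ckey is empty), so the --dhchap-ctrlr-key argument is omitted, as visible in the attach_controller calls for key4 above.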
00:17:20.620 nvme0n1 00:17:20.620 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.620 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:20.620 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.620 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:20.620 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.620 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.620 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.620 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:20.620 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.620 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.620 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.620 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:20.620 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:20.620 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:17:20.620 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:20.620 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:20.620 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:20.620 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:20.620 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjE4MjBkMDQxODFhN2IxYmZhMjZkODk0NWZmZWUxY2VAPr2H: 00:17:20.620 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGY3YTQzYjAyNGVjM2I0NzM2N2RkNjI0M2M5NzZmZDkyODdjMjIwNGNmZjFiZTUzMjljOWE4MzFjNTg0MzIwZvyzlvY=: 00:17:20.620 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:20.620 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:20.620 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjE4MjBkMDQxODFhN2IxYmZhMjZkODk0NWZmZWUxY2VAPr2H: 00:17:20.620 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGY3YTQzYjAyNGVjM2I0NzM2N2RkNjI0M2M5NzZmZDkyODdjMjIwNGNmZjFiZTUzMjljOWE4MzFjNTg0MzIwZvyzlvY=: ]] 00:17:20.620 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGY3YTQzYjAyNGVjM2I0NzM2N2RkNjI0M2M5NzZmZDkyODdjMjIwNGNmZjFiZTUzMjljOWE4MzFjNTg0MzIwZvyzlvY=: 00:17:20.620 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:17:20.620 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:20.620 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:20.620 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:20.620 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:20.620 11:01:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:20.620 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:20.620 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.620 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.620 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.620 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:20.620 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:20.620 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:20.620 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:20.620 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:20.620 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:20.620 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:20.620 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:20.620 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:20.620 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:20.620 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:20.620 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:20.620 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.620 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.879 nvme0n1 00:17:20.879 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.880 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:20.880 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.880 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:20.880 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.880 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.880 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.880 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:20.880 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.880 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.880 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.880 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:20.880 11:01:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:17:20.880 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:20.880 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:20.880 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:20.880 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:20.880 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmZlYjRhZmI0NTFiZTFjOWZlNTRjYzFmZDk2YzI1ZjAzMDA0ODQxM2MxMTU3YTA5YwNPfg==: 00:17:20.880 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmNiNTFkN2M2MTBhYTI1ZjRlZjkxMDVlMDI4MWY5MDE2MjBmYWE4MTYzYWQyZDVlOVTYSQ==: 00:17:20.880 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:20.880 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:20.880 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmZlYjRhZmI0NTFiZTFjOWZlNTRjYzFmZDk2YzI1ZjAzMDA0ODQxM2MxMTU3YTA5YwNPfg==: 00:17:20.880 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmNiNTFkN2M2MTBhYTI1ZjRlZjkxMDVlMDI4MWY5MDE2MjBmYWE4MTYzYWQyZDVlOVTYSQ==: ]] 00:17:20.880 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmNiNTFkN2M2MTBhYTI1ZjRlZjkxMDVlMDI4MWY5MDE2MjBmYWE4MTYzYWQyZDVlOVTYSQ==: 00:17:20.880 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:17:20.880 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:20.880 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:20.880 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:20.880 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:20.880 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:20.880 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:20.880 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.880 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.880 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.880 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:20.880 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:20.880 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:20.880 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:20.880 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:20.880 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:20.880 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:20.880 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:20.880 11:01:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:20.880 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:20.880 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:20.880 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:20.880 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.880 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.139 nvme0n1 00:17:21.139 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.139 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:21.139 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:21.139 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.139 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.139 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.139 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.139 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:21.139 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.139 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.139 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.139 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:21.139 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:17:21.139 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:21.139 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:21.139 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:21.139 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:21.139 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzE2MjBiYWMxMWFiMmIxYTZjZDdiZTkzMTBkZDFhOTbm1UA4: 00:17:21.139 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGI2YzRhYTgxOTU2NzNmMzQxZjM2YzQ3YWUxNjQ3MjJ9brEL: 00:17:21.139 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:21.139 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:21.140 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzE2MjBiYWMxMWFiMmIxYTZjZDdiZTkzMTBkZDFhOTbm1UA4: 00:17:21.140 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGI2YzRhYTgxOTU2NzNmMzQxZjM2YzQ3YWUxNjQ3MjJ9brEL: ]] 00:17:21.140 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGI2YzRhYTgxOTU2NzNmMzQxZjM2YzQ3YWUxNjQ3MjJ9brEL: 00:17:21.140 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:17:21.140 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:21.140 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:21.140 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:21.140 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:21.140 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:21.140 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:21.140 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.140 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.140 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.140 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:21.140 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:21.140 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:21.140 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:21.140 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:21.140 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:21.140 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:21.140 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:21.140 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:21.140 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:21.140 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:21.140 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:21.140 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.140 11:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.399 nvme0n1 00:17:21.399 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.399 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:21.399 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.399 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.399 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:21.399 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.399 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.399 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:17:21.399 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.399 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.399 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.399 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:21.399 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:17:21.399 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:21.399 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:21.399 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:21.399 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:21.399 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTg4MWU5NzA1OTJlMTUzMDVhNDZiMjRjZjAwYTE2ZGRiMTY2OTgxNTVhNTMyNWU0h6u0CA==: 00:17:21.399 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTdkMjJkMTRjOGNiY2Y1N2JmZWJkYTBiYjcyNjkzNjSmxfQN: 00:17:21.399 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:21.399 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:21.399 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTg4MWU5NzA1OTJlMTUzMDVhNDZiMjRjZjAwYTE2ZGRiMTY2OTgxNTVhNTMyNWU0h6u0CA==: 00:17:21.400 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTdkMjJkMTRjOGNiY2Y1N2JmZWJkYTBiYjcyNjkzNjSmxfQN: ]] 00:17:21.400 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTdkMjJkMTRjOGNiY2Y1N2JmZWJkYTBiYjcyNjkzNjSmxfQN: 00:17:21.400 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:17:21.400 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:21.400 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:21.400 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:21.400 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:21.400 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:21.400 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:21.400 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.400 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.400 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.400 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:21.400 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:21.400 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:21.400 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:21.400 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:21.400 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:21.400 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:21.400 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:21.400 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:21.400 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:21.400 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:21.400 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:21.400 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.400 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.660 nvme0n1 00:17:21.660 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.660 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:21.660 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:21.660 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.660 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.660 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.660 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.660 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:21.660 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.660 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.660 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.660 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:21.660 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:17:21.660 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:21.660 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:21.660 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:21.660 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:21.660 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmUwZDVkNWIwMTAyYmE2ODU5ODQ5MTNmMjU3N2VhMjBjZWJkOTQ5ODgyZWIxZTY0MTI4YzlmNjY3Njg3MjM4MxpwKII=: 00:17:21.660 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:21.660 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:21.660 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:21.660 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZmUwZDVkNWIwMTAyYmE2ODU5ODQ5MTNmMjU3N2VhMjBjZWJkOTQ5ODgyZWIxZTY0MTI4YzlmNjY3Njg3MjM4MxpwKII=: 00:17:21.660 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:21.660 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:17:21.660 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:21.660 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:21.660 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:21.660 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:21.660 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:21.660 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:21.660 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.660 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.660 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.660 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:21.660 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:21.660 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:21.660 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:21.660 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:21.660 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:21.660 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:21.660 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:21.660 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:21.660 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:21.660 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:21.660 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:21.660 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.660 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.919 nvme0n1 00:17:21.919 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.919 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:21.920 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:21.920 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.920 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.920 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.920 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.920 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:21.920 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.920 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.920 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.920 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:21.920 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:21.920 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:17:21.920 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:21.920 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:21.920 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:21.920 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:21.920 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjE4MjBkMDQxODFhN2IxYmZhMjZkODk0NWZmZWUxY2VAPr2H: 00:17:21.920 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGY3YTQzYjAyNGVjM2I0NzM2N2RkNjI0M2M5NzZmZDkyODdjMjIwNGNmZjFiZTUzMjljOWE4MzFjNTg0MzIwZvyzlvY=: 00:17:21.920 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:21.920 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:21.920 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjE4MjBkMDQxODFhN2IxYmZhMjZkODk0NWZmZWUxY2VAPr2H: 00:17:21.920 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGY3YTQzYjAyNGVjM2I0NzM2N2RkNjI0M2M5NzZmZDkyODdjMjIwNGNmZjFiZTUzMjljOWE4MzFjNTg0MzIwZvyzlvY=: ]] 00:17:21.920 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGY3YTQzYjAyNGVjM2I0NzM2N2RkNjI0M2M5NzZmZDkyODdjMjIwNGNmZjFiZTUzMjljOWE4MzFjNTg0MzIwZvyzlvY=: 00:17:21.920 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:17:21.920 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:21.920 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:21.920 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:21.920 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:21.920 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:21.920 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:21.920 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.920 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.920 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.920 11:01:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:21.920 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:21.920 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:21.920 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:21.920 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:21.920 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:21.920 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:21.920 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:21.920 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:21.920 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:21.920 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:21.920 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:21.920 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.920 11:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.488 nvme0n1 00:17:22.488 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.488 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:22.488 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:22.488 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.488 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.488 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.488 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.488 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:22.488 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.488 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.488 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.488 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:22.488 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:17:22.488 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:22.488 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:22.488 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:22.488 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:22.488 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NmZlYjRhZmI0NTFiZTFjOWZlNTRjYzFmZDk2YzI1ZjAzMDA0ODQxM2MxMTU3YTA5YwNPfg==: 00:17:22.488 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmNiNTFkN2M2MTBhYTI1ZjRlZjkxMDVlMDI4MWY5MDE2MjBmYWE4MTYzYWQyZDVlOVTYSQ==: 00:17:22.488 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:22.488 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:22.488 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmZlYjRhZmI0NTFiZTFjOWZlNTRjYzFmZDk2YzI1ZjAzMDA0ODQxM2MxMTU3YTA5YwNPfg==: 00:17:22.488 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmNiNTFkN2M2MTBhYTI1ZjRlZjkxMDVlMDI4MWY5MDE2MjBmYWE4MTYzYWQyZDVlOVTYSQ==: ]] 00:17:22.488 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmNiNTFkN2M2MTBhYTI1ZjRlZjkxMDVlMDI4MWY5MDE2MjBmYWE4MTYzYWQyZDVlOVTYSQ==: 00:17:22.488 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:17:22.488 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:22.488 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:22.488 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:22.488 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:22.488 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:22.488 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:22.488 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.488 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.488 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.488 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:22.488 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:22.488 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:22.488 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:22.488 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:22.488 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:22.488 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:22.488 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:22.488 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:22.488 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:22.488 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:22.488 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:22.488 11:01:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.488 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.748 nvme0n1 00:17:22.748 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.748 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:22.748 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:22.748 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.748 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.748 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.748 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.748 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:22.748 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.748 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.748 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.748 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:22.748 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:17:22.748 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:22.748 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:22.748 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:22.748 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:22.748 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzE2MjBiYWMxMWFiMmIxYTZjZDdiZTkzMTBkZDFhOTbm1UA4: 00:17:22.748 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGI2YzRhYTgxOTU2NzNmMzQxZjM2YzQ3YWUxNjQ3MjJ9brEL: 00:17:22.748 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:22.748 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:22.748 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzE2MjBiYWMxMWFiMmIxYTZjZDdiZTkzMTBkZDFhOTbm1UA4: 00:17:22.748 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGI2YzRhYTgxOTU2NzNmMzQxZjM2YzQ3YWUxNjQ3MjJ9brEL: ]] 00:17:22.748 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGI2YzRhYTgxOTU2NzNmMzQxZjM2YzQ3YWUxNjQ3MjJ9brEL: 00:17:22.748 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:17:22.748 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:22.748 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:22.748 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:22.748 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:22.748 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:22.748 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:22.748 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.748 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.748 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.748 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:22.748 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:22.748 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:22.748 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:22.748 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:22.748 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:22.748 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:22.748 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:22.748 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:22.748 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:22.748 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:22.748 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:22.748 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.748 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.007 nvme0n1 00:17:23.007 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.007 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:23.007 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:23.007 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.007 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.007 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.266 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.266 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:23.266 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.266 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.266 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.266 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:23.266 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe6144 3 00:17:23.266 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:23.266 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:23.266 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:23.266 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:23.267 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTg4MWU5NzA1OTJlMTUzMDVhNDZiMjRjZjAwYTE2ZGRiMTY2OTgxNTVhNTMyNWU0h6u0CA==: 00:17:23.267 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTdkMjJkMTRjOGNiY2Y1N2JmZWJkYTBiYjcyNjkzNjSmxfQN: 00:17:23.267 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:23.267 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:23.267 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTg4MWU5NzA1OTJlMTUzMDVhNDZiMjRjZjAwYTE2ZGRiMTY2OTgxNTVhNTMyNWU0h6u0CA==: 00:17:23.267 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTdkMjJkMTRjOGNiY2Y1N2JmZWJkYTBiYjcyNjkzNjSmxfQN: ]] 00:17:23.267 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTdkMjJkMTRjOGNiY2Y1N2JmZWJkYTBiYjcyNjkzNjSmxfQN: 00:17:23.267 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:17:23.267 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:23.267 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:23.267 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:23.267 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:23.267 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:23.267 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:23.267 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.267 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.267 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.267 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:23.267 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:23.267 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:23.267 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:23.267 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:23.267 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:23.267 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:23.267 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:23.267 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:23.267 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:23.267 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:23.267 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:23.267 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.267 11:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.526 nvme0n1 00:17:23.526 11:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.526 11:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:23.526 11:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:23.526 11:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.526 11:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.526 11:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.526 11:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.526 11:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:23.526 11:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.526 11:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.526 11:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.526 11:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:23.526 11:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:17:23.526 11:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:23.526 11:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:23.526 11:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:23.526 11:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:23.526 11:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmUwZDVkNWIwMTAyYmE2ODU5ODQ5MTNmMjU3N2VhMjBjZWJkOTQ5ODgyZWIxZTY0MTI4YzlmNjY3Njg3MjM4MxpwKII=: 00:17:23.526 11:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:23.526 11:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:23.526 11:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:23.526 11:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmUwZDVkNWIwMTAyYmE2ODU5ODQ5MTNmMjU3N2VhMjBjZWJkOTQ5ODgyZWIxZTY0MTI4YzlmNjY3Njg3MjM4MxpwKII=: 00:17:23.526 11:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:23.526 11:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:17:23.526 11:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:23.526 11:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:23.526 11:01:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:23.526 11:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:23.526 11:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:23.526 11:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:23.526 11:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.526 11:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.526 11:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.526 11:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:23.526 11:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:23.526 11:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:23.526 11:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:23.526 11:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:23.526 11:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:23.526 11:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:23.526 11:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:23.526 11:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:23.526 11:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:23.526 11:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:23.526 11:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:23.526 11:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.526 11:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.785 nvme0n1 00:17:23.785 11:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.785 11:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:23.785 11:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:23.785 11:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.785 11:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.045 11:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.045 11:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.045 11:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:24.045 11:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.045 11:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.045 11:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.045 11:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:24.045 11:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:24.045 11:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:17:24.045 11:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:24.045 11:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:24.045 11:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:24.045 11:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:24.045 11:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjE4MjBkMDQxODFhN2IxYmZhMjZkODk0NWZmZWUxY2VAPr2H: 00:17:24.045 11:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGY3YTQzYjAyNGVjM2I0NzM2N2RkNjI0M2M5NzZmZDkyODdjMjIwNGNmZjFiZTUzMjljOWE4MzFjNTg0MzIwZvyzlvY=: 00:17:24.045 11:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:24.045 11:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:24.045 11:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjE4MjBkMDQxODFhN2IxYmZhMjZkODk0NWZmZWUxY2VAPr2H: 00:17:24.045 11:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGY3YTQzYjAyNGVjM2I0NzM2N2RkNjI0M2M5NzZmZDkyODdjMjIwNGNmZjFiZTUzMjljOWE4MzFjNTg0MzIwZvyzlvY=: ]] 00:17:24.045 11:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGY3YTQzYjAyNGVjM2I0NzM2N2RkNjI0M2M5NzZmZDkyODdjMjIwNGNmZjFiZTUzMjljOWE4MzFjNTg0MzIwZvyzlvY=: 00:17:24.045 11:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:17:24.045 11:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:24.045 11:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:24.045 11:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:24.045 11:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:24.045 11:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:24.045 11:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:24.045 11:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.045 11:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.045 11:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.045 11:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:24.045 11:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:24.045 11:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:24.045 11:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:24.045 11:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:24.045 11:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:24.045 11:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:24.045 11:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:24.045 11:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:24.045 11:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:24.045 11:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:24.045 11:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:24.045 11:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.045 11:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.614 nvme0n1 00:17:24.614 11:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.614 11:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:24.614 11:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:24.614 11:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.614 11:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.614 11:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.614 11:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.614 11:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:24.614 11:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.614 11:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.614 11:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.614 11:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:24.614 11:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:17:24.614 11:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:24.614 11:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:24.614 11:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:24.614 11:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:24.614 11:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmZlYjRhZmI0NTFiZTFjOWZlNTRjYzFmZDk2YzI1ZjAzMDA0ODQxM2MxMTU3YTA5YwNPfg==: 00:17:24.614 11:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmNiNTFkN2M2MTBhYTI1ZjRlZjkxMDVlMDI4MWY5MDE2MjBmYWE4MTYzYWQyZDVlOVTYSQ==: 00:17:24.614 11:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:24.614 11:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:24.614 11:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NmZlYjRhZmI0NTFiZTFjOWZlNTRjYzFmZDk2YzI1ZjAzMDA0ODQxM2MxMTU3YTA5YwNPfg==: 00:17:24.614 11:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmNiNTFkN2M2MTBhYTI1ZjRlZjkxMDVlMDI4MWY5MDE2MjBmYWE4MTYzYWQyZDVlOVTYSQ==: ]] 00:17:24.614 11:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmNiNTFkN2M2MTBhYTI1ZjRlZjkxMDVlMDI4MWY5MDE2MjBmYWE4MTYzYWQyZDVlOVTYSQ==: 00:17:24.614 11:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:17:24.614 11:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:24.614 11:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:24.614 11:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:24.614 11:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:24.614 11:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:24.614 11:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:24.614 11:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.614 11:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.614 11:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.614 11:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:24.614 11:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:24.614 11:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:24.614 11:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:24.614 11:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:24.614 11:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:24.614 11:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:24.614 11:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:24.614 11:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:24.614 11:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:24.614 11:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:24.614 11:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:24.614 11:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.614 11:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.182 nvme0n1 00:17:25.182 11:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.182 11:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:25.182 11:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.182 11:01:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:25.182 11:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.182 11:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.182 11:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.182 11:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:25.182 11:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.182 11:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.182 11:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.182 11:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:25.182 11:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:17:25.182 11:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:25.182 11:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:25.182 11:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:25.182 11:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:25.182 11:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzE2MjBiYWMxMWFiMmIxYTZjZDdiZTkzMTBkZDFhOTbm1UA4: 00:17:25.182 11:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGI2YzRhYTgxOTU2NzNmMzQxZjM2YzQ3YWUxNjQ3MjJ9brEL: 00:17:25.182 11:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:25.182 11:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:25.182 11:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzE2MjBiYWMxMWFiMmIxYTZjZDdiZTkzMTBkZDFhOTbm1UA4: 00:17:25.182 11:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGI2YzRhYTgxOTU2NzNmMzQxZjM2YzQ3YWUxNjQ3MjJ9brEL: ]] 00:17:25.182 11:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGI2YzRhYTgxOTU2NzNmMzQxZjM2YzQ3YWUxNjQ3MjJ9brEL: 00:17:25.182 11:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:17:25.182 11:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:25.182 11:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:25.182 11:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:25.183 11:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:25.183 11:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:25.183 11:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:25.183 11:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.183 11:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.183 11:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.183 11:01:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:25.183 11:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:25.183 11:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:25.183 11:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:25.183 11:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:25.183 11:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:25.183 11:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:25.183 11:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:25.183 11:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:25.183 11:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:25.183 11:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:25.183 11:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:25.183 11:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.183 11:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.751 nvme0n1 00:17:25.751 11:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.751 11:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:25.751 11:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:25.751 11:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.751 11:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.751 11:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.751 11:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.751 11:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:25.751 11:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.751 11:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.751 11:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.751 11:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:25.751 11:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:17:25.751 11:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:25.751 11:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:25.751 11:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:25.751 11:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:25.751 11:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NTg4MWU5NzA1OTJlMTUzMDVhNDZiMjRjZjAwYTE2ZGRiMTY2OTgxNTVhNTMyNWU0h6u0CA==: 00:17:25.751 11:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTdkMjJkMTRjOGNiY2Y1N2JmZWJkYTBiYjcyNjkzNjSmxfQN: 00:17:25.751 11:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:25.751 11:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:25.751 11:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTg4MWU5NzA1OTJlMTUzMDVhNDZiMjRjZjAwYTE2ZGRiMTY2OTgxNTVhNTMyNWU0h6u0CA==: 00:17:25.751 11:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTdkMjJkMTRjOGNiY2Y1N2JmZWJkYTBiYjcyNjkzNjSmxfQN: ]] 00:17:25.751 11:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTdkMjJkMTRjOGNiY2Y1N2JmZWJkYTBiYjcyNjkzNjSmxfQN: 00:17:25.751 11:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:17:25.751 11:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:25.751 11:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:25.751 11:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:25.751 11:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:25.751 11:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:25.751 11:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:25.751 11:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.751 11:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.751 11:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.751 11:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:25.751 11:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:25.751 11:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:25.751 11:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:25.751 11:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:25.751 11:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:25.751 11:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:25.751 11:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:25.751 11:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:25.751 11:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:25.751 11:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:25.752 11:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:25.752 11:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.752 
11:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.320 nvme0n1 00:17:26.320 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.320 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:26.320 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:26.320 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.320 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.320 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.320 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.320 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:26.320 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.320 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.579 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.579 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:26.579 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:17:26.579 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:26.579 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:26.579 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:26.579 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:26.579 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmUwZDVkNWIwMTAyYmE2ODU5ODQ5MTNmMjU3N2VhMjBjZWJkOTQ5ODgyZWIxZTY0MTI4YzlmNjY3Njg3MjM4MxpwKII=: 00:17:26.579 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:26.579 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:26.579 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:26.579 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmUwZDVkNWIwMTAyYmE2ODU5ODQ5MTNmMjU3N2VhMjBjZWJkOTQ5ODgyZWIxZTY0MTI4YzlmNjY3Njg3MjM4MxpwKII=: 00:17:26.579 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:26.579 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:17:26.579 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:26.579 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:26.579 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:26.580 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:26.580 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:26.580 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:26.580 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.580 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.580 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.580 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:26.580 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:26.580 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:26.580 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:26.580 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:26.580 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:26.580 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:26.580 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:26.580 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:26.580 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:26.580 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:26.580 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:26.580 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.580 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.148 nvme0n1 00:17:27.148 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.148 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:27.148 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.148 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:27.148 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.148 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.148 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.148 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:27.148 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.148 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.148 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.148 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:17:27.148 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:27.148 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:27.148 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:17:27.148 11:01:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:27.148 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:27.148 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:27.148 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:27.148 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjE4MjBkMDQxODFhN2IxYmZhMjZkODk0NWZmZWUxY2VAPr2H: 00:17:27.148 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGY3YTQzYjAyNGVjM2I0NzM2N2RkNjI0M2M5NzZmZDkyODdjMjIwNGNmZjFiZTUzMjljOWE4MzFjNTg0MzIwZvyzlvY=: 00:17:27.148 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:27.148 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:27.148 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjE4MjBkMDQxODFhN2IxYmZhMjZkODk0NWZmZWUxY2VAPr2H: 00:17:27.148 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGY3YTQzYjAyNGVjM2I0NzM2N2RkNjI0M2M5NzZmZDkyODdjMjIwNGNmZjFiZTUzMjljOWE4MzFjNTg0MzIwZvyzlvY=: ]] 00:17:27.148 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGY3YTQzYjAyNGVjM2I0NzM2N2RkNjI0M2M5NzZmZDkyODdjMjIwNGNmZjFiZTUzMjljOWE4MzFjNTg0MzIwZvyzlvY=: 00:17:27.148 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:17:27.148 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:27.148 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:27.148 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:27.148 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:27.148 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:27.148 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:27.148 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.148 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.148 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.148 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:27.148 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:27.148 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:27.148 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:27.148 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:27.148 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:27.148 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:27.148 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:27.148 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:27.148 11:01:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:27.148 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:27.148 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:27.148 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.148 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.148 nvme0n1 00:17:27.148 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.148 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:27.148 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:27.148 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.148 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.148 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.148 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.148 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:27.148 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.148 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.148 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.148 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:27.148 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:17:27.148 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:27.148 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:27.148 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:27.148 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:27.148 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmZlYjRhZmI0NTFiZTFjOWZlNTRjYzFmZDk2YzI1ZjAzMDA0ODQxM2MxMTU3YTA5YwNPfg==: 00:17:27.148 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmNiNTFkN2M2MTBhYTI1ZjRlZjkxMDVlMDI4MWY5MDE2MjBmYWE4MTYzYWQyZDVlOVTYSQ==: 00:17:27.148 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:27.148 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:27.148 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmZlYjRhZmI0NTFiZTFjOWZlNTRjYzFmZDk2YzI1ZjAzMDA0ODQxM2MxMTU3YTA5YwNPfg==: 00:17:27.148 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmNiNTFkN2M2MTBhYTI1ZjRlZjkxMDVlMDI4MWY5MDE2MjBmYWE4MTYzYWQyZDVlOVTYSQ==: ]] 00:17:27.148 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmNiNTFkN2M2MTBhYTI1ZjRlZjkxMDVlMDI4MWY5MDE2MjBmYWE4MTYzYWQyZDVlOVTYSQ==: 00:17:27.148 11:01:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:17:27.148 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:27.148 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:27.148 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:27.148 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:27.148 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:27.148 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:27.148 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.148 11:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.148 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.407 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:27.407 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:27.407 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:27.407 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:27.407 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:27.407 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:27.407 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:27.407 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:27.408 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:27.408 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:27.408 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:27.408 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:27.408 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.408 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.408 nvme0n1 00:17:27.408 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.408 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:27.408 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:27.408 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.408 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.408 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.408 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.408 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:27.408 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.408 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.408 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.408 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:27.408 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:17:27.408 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:27.408 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:27.408 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:27.408 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:27.408 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzE2MjBiYWMxMWFiMmIxYTZjZDdiZTkzMTBkZDFhOTbm1UA4: 00:17:27.408 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGI2YzRhYTgxOTU2NzNmMzQxZjM2YzQ3YWUxNjQ3MjJ9brEL: 00:17:27.408 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:27.408 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:27.408 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzE2MjBiYWMxMWFiMmIxYTZjZDdiZTkzMTBkZDFhOTbm1UA4: 00:17:27.408 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGI2YzRhYTgxOTU2NzNmMzQxZjM2YzQ3YWUxNjQ3MjJ9brEL: ]] 00:17:27.408 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGI2YzRhYTgxOTU2NzNmMzQxZjM2YzQ3YWUxNjQ3MjJ9brEL: 00:17:27.408 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:17:27.408 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:27.408 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:27.408 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:27.408 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:27.408 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:27.408 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:27.408 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.408 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.408 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.408 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:27.408 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:27.408 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:27.408 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:27.408 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:27.408 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:27.408 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:27.408 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:27.408 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:27.408 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:27.408 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:27.408 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:27.408 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.408 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.668 nvme0n1 00:17:27.668 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.668 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:27.668 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:27.668 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.668 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.668 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.668 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.668 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:27.668 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.668 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.668 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.668 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:27.668 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:17:27.668 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:27.668 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:27.668 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:27.668 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:27.668 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTg4MWU5NzA1OTJlMTUzMDVhNDZiMjRjZjAwYTE2ZGRiMTY2OTgxNTVhNTMyNWU0h6u0CA==: 00:17:27.668 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTdkMjJkMTRjOGNiY2Y1N2JmZWJkYTBiYjcyNjkzNjSmxfQN: 00:17:27.668 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:27.668 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:27.668 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:02:NTg4MWU5NzA1OTJlMTUzMDVhNDZiMjRjZjAwYTE2ZGRiMTY2OTgxNTVhNTMyNWU0h6u0CA==: 00:17:27.668 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTdkMjJkMTRjOGNiY2Y1N2JmZWJkYTBiYjcyNjkzNjSmxfQN: ]] 00:17:27.668 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTdkMjJkMTRjOGNiY2Y1N2JmZWJkYTBiYjcyNjkzNjSmxfQN: 00:17:27.668 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:17:27.668 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:27.668 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:27.668 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:27.668 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:27.668 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:27.668 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:27.668 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.668 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.668 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.668 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:27.668 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:27.668 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:27.668 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:27.668 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:27.668 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:27.668 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:27.668 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:27.668 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:27.668 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:27.668 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:27.668 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:27.668 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.668 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.668 nvme0n1 00:17:27.668 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.668 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:27.668 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.668 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:17:27.668 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.668 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.668 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.668 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:27.668 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.668 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.928 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.928 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:27.928 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:17:27.928 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:27.928 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:27.928 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:27.928 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:27.928 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmUwZDVkNWIwMTAyYmE2ODU5ODQ5MTNmMjU3N2VhMjBjZWJkOTQ5ODgyZWIxZTY0MTI4YzlmNjY3Njg3MjM4MxpwKII=: 00:17:27.928 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:27.928 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:27.928 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:27.928 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmUwZDVkNWIwMTAyYmE2ODU5ODQ5MTNmMjU3N2VhMjBjZWJkOTQ5ODgyZWIxZTY0MTI4YzlmNjY3Njg3MjM4MxpwKII=: 00:17:27.928 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:27.928 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:17:27.928 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:27.928 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:27.928 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:27.928 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:27.928 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:27.928 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:27.928 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.928 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.928 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.928 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:27.928 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:27.928 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:17:27.928 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:27.928 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:27.928 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:27.928 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:27.928 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:27.928 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:27.928 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:27.928 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:27.928 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:27.928 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.928 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.928 nvme0n1 00:17:27.928 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.928 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:27.928 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.928 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.928 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:27.928 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.928 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.928 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:27.928 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.928 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.928 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.928 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:27.928 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:27.928 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:17:27.928 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:27.928 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:27.928 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:27.928 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:27.929 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjE4MjBkMDQxODFhN2IxYmZhMjZkODk0NWZmZWUxY2VAPr2H: 00:17:27.929 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZGY3YTQzYjAyNGVjM2I0NzM2N2RkNjI0M2M5NzZmZDkyODdjMjIwNGNmZjFiZTUzMjljOWE4MzFjNTg0MzIwZvyzlvY=: 00:17:27.929 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:27.929 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:27.929 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjE4MjBkMDQxODFhN2IxYmZhMjZkODk0NWZmZWUxY2VAPr2H: 00:17:27.929 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGY3YTQzYjAyNGVjM2I0NzM2N2RkNjI0M2M5NzZmZDkyODdjMjIwNGNmZjFiZTUzMjljOWE4MzFjNTg0MzIwZvyzlvY=: ]] 00:17:27.929 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGY3YTQzYjAyNGVjM2I0NzM2N2RkNjI0M2M5NzZmZDkyODdjMjIwNGNmZjFiZTUzMjljOWE4MzFjNTg0MzIwZvyzlvY=: 00:17:27.929 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:17:27.929 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:27.929 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:27.929 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:27.929 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:27.929 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:27.929 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:27.929 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.929 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.929 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.929 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:27.929 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:27.929 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:27.929 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:27.929 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:27.929 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:27.929 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:27.929 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:27.929 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:27.929 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:27.929 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:27.929 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:27.929 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.929 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:17:28.188 nvme0n1 00:17:28.188 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.188 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:28.188 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:28.188 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.188 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.188 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.188 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.188 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:28.188 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.188 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.188 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.188 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:28.188 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:17:28.188 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:28.188 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:28.188 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:28.188 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:28.188 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmZlYjRhZmI0NTFiZTFjOWZlNTRjYzFmZDk2YzI1ZjAzMDA0ODQxM2MxMTU3YTA5YwNPfg==: 00:17:28.188 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmNiNTFkN2M2MTBhYTI1ZjRlZjkxMDVlMDI4MWY5MDE2MjBmYWE4MTYzYWQyZDVlOVTYSQ==: 00:17:28.188 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:28.188 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:28.188 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmZlYjRhZmI0NTFiZTFjOWZlNTRjYzFmZDk2YzI1ZjAzMDA0ODQxM2MxMTU3YTA5YwNPfg==: 00:17:28.188 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmNiNTFkN2M2MTBhYTI1ZjRlZjkxMDVlMDI4MWY5MDE2MjBmYWE4MTYzYWQyZDVlOVTYSQ==: ]] 00:17:28.188 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmNiNTFkN2M2MTBhYTI1ZjRlZjkxMDVlMDI4MWY5MDE2MjBmYWE4MTYzYWQyZDVlOVTYSQ==: 00:17:28.188 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:17:28.188 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:28.188 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:28.188 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:28.188 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:28.188 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:17:28.188 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:28.188 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.188 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.188 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.188 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:28.188 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:28.188 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:28.188 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:28.189 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:28.189 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:28.189 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:28.189 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:28.189 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:28.189 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:28.189 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:28.189 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:28.189 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.189 11:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.447 nvme0n1 00:17:28.447 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.447 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:28.447 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:28.447 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.447 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.447 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.447 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.447 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:28.447 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.447 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.447 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.447 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:28.447 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:17:28.447 
11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:28.447 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:28.447 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:28.447 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:28.447 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzE2MjBiYWMxMWFiMmIxYTZjZDdiZTkzMTBkZDFhOTbm1UA4: 00:17:28.448 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGI2YzRhYTgxOTU2NzNmMzQxZjM2YzQ3YWUxNjQ3MjJ9brEL: 00:17:28.448 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:28.448 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:28.448 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzE2MjBiYWMxMWFiMmIxYTZjZDdiZTkzMTBkZDFhOTbm1UA4: 00:17:28.448 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGI2YzRhYTgxOTU2NzNmMzQxZjM2YzQ3YWUxNjQ3MjJ9brEL: ]] 00:17:28.448 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGI2YzRhYTgxOTU2NzNmMzQxZjM2YzQ3YWUxNjQ3MjJ9brEL: 00:17:28.448 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:17:28.448 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:28.448 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:28.448 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:28.448 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:28.448 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:28.448 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:28.448 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.448 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.448 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.448 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:28.448 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:28.448 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:28.448 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:28.448 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:28.448 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:28.448 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:28.448 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:28.448 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:28.448 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:28.448 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:28.448 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:28.448 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.448 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.448 nvme0n1 00:17:28.448 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.448 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:28.448 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:28.448 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.448 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.448 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.707 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.707 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:28.707 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.707 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.707 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.707 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:28.707 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:17:28.707 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:28.707 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:28.707 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:28.707 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:28.707 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTg4MWU5NzA1OTJlMTUzMDVhNDZiMjRjZjAwYTE2ZGRiMTY2OTgxNTVhNTMyNWU0h6u0CA==: 00:17:28.707 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTdkMjJkMTRjOGNiY2Y1N2JmZWJkYTBiYjcyNjkzNjSmxfQN: 00:17:28.707 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:28.707 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:28.707 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTg4MWU5NzA1OTJlMTUzMDVhNDZiMjRjZjAwYTE2ZGRiMTY2OTgxNTVhNTMyNWU0h6u0CA==: 00:17:28.707 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTdkMjJkMTRjOGNiY2Y1N2JmZWJkYTBiYjcyNjkzNjSmxfQN: ]] 00:17:28.707 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTdkMjJkMTRjOGNiY2Y1N2JmZWJkYTBiYjcyNjkzNjSmxfQN: 00:17:28.707 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:17:28.707 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:28.707 
11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:28.707 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:28.707 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:28.707 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:28.707 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:28.707 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.707 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.707 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.707 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:28.707 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:28.707 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:28.707 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:28.707 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:28.707 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:28.707 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:28.707 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:28.708 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:28.708 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:28.708 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:28.708 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:28.708 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.708 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.708 nvme0n1 00:17:28.708 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.708 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:28.708 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.708 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:28.708 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.708 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.708 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.708 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:28.708 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.708 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:17:28.708 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.708 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:28.708 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:17:28.708 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:28.708 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:28.708 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:28.708 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:28.708 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmUwZDVkNWIwMTAyYmE2ODU5ODQ5MTNmMjU3N2VhMjBjZWJkOTQ5ODgyZWIxZTY0MTI4YzlmNjY3Njg3MjM4MxpwKII=: 00:17:28.708 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:28.708 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:28.708 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:28.708 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmUwZDVkNWIwMTAyYmE2ODU5ODQ5MTNmMjU3N2VhMjBjZWJkOTQ5ODgyZWIxZTY0MTI4YzlmNjY3Njg3MjM4MxpwKII=: 00:17:28.708 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:28.708 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:17:28.708 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:28.708 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:28.708 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:28.708 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:28.708 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:28.708 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:28.708 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.708 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.708 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.708 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:28.708 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:28.708 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:28.708 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:28.708 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:28.708 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:28.708 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:28.708 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:28.708 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:28.708 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:28.708 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:28.708 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:28.708 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.708 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.967 nvme0n1 00:17:28.967 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.967 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:28.967 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.967 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:28.967 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.968 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.968 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.968 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:28.968 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.968 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.968 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.968 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:28.968 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:28.968 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:17:28.968 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:28.968 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:28.968 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:28.968 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:28.968 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjE4MjBkMDQxODFhN2IxYmZhMjZkODk0NWZmZWUxY2VAPr2H: 00:17:28.968 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGY3YTQzYjAyNGVjM2I0NzM2N2RkNjI0M2M5NzZmZDkyODdjMjIwNGNmZjFiZTUzMjljOWE4MzFjNTg0MzIwZvyzlvY=: 00:17:28.968 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:28.968 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:28.968 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjE4MjBkMDQxODFhN2IxYmZhMjZkODk0NWZmZWUxY2VAPr2H: 00:17:28.968 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGY3YTQzYjAyNGVjM2I0NzM2N2RkNjI0M2M5NzZmZDkyODdjMjIwNGNmZjFiZTUzMjljOWE4MzFjNTg0MzIwZvyzlvY=: ]] 00:17:28.968 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZGY3YTQzYjAyNGVjM2I0NzM2N2RkNjI0M2M5NzZmZDkyODdjMjIwNGNmZjFiZTUzMjljOWE4MzFjNTg0MzIwZvyzlvY=: 00:17:28.968 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:17:28.968 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:28.968 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:28.968 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:28.968 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:28.968 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:28.968 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:28.968 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.968 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.968 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.968 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:28.968 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:28.968 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:28.968 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:28.968 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:28.968 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:28.968 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:28.968 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:28.968 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:28.968 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:28.968 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:28.968 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:28.968 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.968 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.227 nvme0n1 00:17:29.227 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.227 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:29.227 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.227 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:29.227 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.227 11:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.227 
11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.227 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:29.227 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.227 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.227 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.227 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:29.227 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:17:29.227 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:29.227 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:29.227 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:29.227 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:29.227 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmZlYjRhZmI0NTFiZTFjOWZlNTRjYzFmZDk2YzI1ZjAzMDA0ODQxM2MxMTU3YTA5YwNPfg==: 00:17:29.227 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmNiNTFkN2M2MTBhYTI1ZjRlZjkxMDVlMDI4MWY5MDE2MjBmYWE4MTYzYWQyZDVlOVTYSQ==: 00:17:29.227 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:29.227 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:29.227 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmZlYjRhZmI0NTFiZTFjOWZlNTRjYzFmZDk2YzI1ZjAzMDA0ODQxM2MxMTU3YTA5YwNPfg==: 00:17:29.227 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmNiNTFkN2M2MTBhYTI1ZjRlZjkxMDVlMDI4MWY5MDE2MjBmYWE4MTYzYWQyZDVlOVTYSQ==: ]] 00:17:29.227 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmNiNTFkN2M2MTBhYTI1ZjRlZjkxMDVlMDI4MWY5MDE2MjBmYWE4MTYzYWQyZDVlOVTYSQ==: 00:17:29.227 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:17:29.227 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:29.227 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:29.227 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:29.227 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:29.227 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:29.227 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:29.227 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.227 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.227 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.227 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:29.227 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:29.227 11:01:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:29.227 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:29.227 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:29.227 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:29.227 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:29.227 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:29.227 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:29.227 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:29.227 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:29.227 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:29.227 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.227 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.487 nvme0n1 00:17:29.487 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.487 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:29.487 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:29.487 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.487 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.487 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.487 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.487 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:29.487 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.487 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.487 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.487 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:29.487 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:17:29.487 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:29.487 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:29.487 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:29.487 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:29.487 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzE2MjBiYWMxMWFiMmIxYTZjZDdiZTkzMTBkZDFhOTbm1UA4: 00:17:29.487 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGI2YzRhYTgxOTU2NzNmMzQxZjM2YzQ3YWUxNjQ3MjJ9brEL: 00:17:29.487 11:01:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:29.487 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:29.487 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzE2MjBiYWMxMWFiMmIxYTZjZDdiZTkzMTBkZDFhOTbm1UA4: 00:17:29.487 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGI2YzRhYTgxOTU2NzNmMzQxZjM2YzQ3YWUxNjQ3MjJ9brEL: ]] 00:17:29.487 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGI2YzRhYTgxOTU2NzNmMzQxZjM2YzQ3YWUxNjQ3MjJ9brEL: 00:17:29.487 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:17:29.487 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:29.487 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:29.487 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:29.487 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:29.487 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:29.487 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:29.487 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.487 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.487 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.487 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:29.487 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:29.487 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:29.487 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:29.487 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:29.487 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:29.487 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:29.487 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:29.487 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:29.487 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:29.487 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:29.487 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:29.487 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.487 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.746 nvme0n1 00:17:29.747 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.747 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:29.747 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:29.747 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.747 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.747 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.747 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.747 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:29.747 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.747 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.747 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.747 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:29.747 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:17:29.747 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:29.747 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:29.747 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:29.747 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:29.747 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTg4MWU5NzA1OTJlMTUzMDVhNDZiMjRjZjAwYTE2ZGRiMTY2OTgxNTVhNTMyNWU0h6u0CA==: 00:17:29.747 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTdkMjJkMTRjOGNiY2Y1N2JmZWJkYTBiYjcyNjkzNjSmxfQN: 00:17:29.747 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:29.747 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:29.747 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTg4MWU5NzA1OTJlMTUzMDVhNDZiMjRjZjAwYTE2ZGRiMTY2OTgxNTVhNTMyNWU0h6u0CA==: 00:17:29.747 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTdkMjJkMTRjOGNiY2Y1N2JmZWJkYTBiYjcyNjkzNjSmxfQN: ]] 00:17:29.747 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTdkMjJkMTRjOGNiY2Y1N2JmZWJkYTBiYjcyNjkzNjSmxfQN: 00:17:29.747 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:17:29.747 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:29.747 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:29.747 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:29.747 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:29.747 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:29.747 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:29.747 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.747 11:01:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.747 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.747 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:29.747 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:29.747 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:29.747 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:29.747 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:29.747 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:29.747 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:29.747 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:29.747 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:29.747 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:29.747 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:29.747 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:29.747 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.747 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.007 nvme0n1 00:17:30.007 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.007 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:30.007 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.007 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.007 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:30.007 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.007 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.007 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:30.007 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.007 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.007 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.007 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:30.007 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:17:30.007 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:30.007 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:30.007 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:30.007 
11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:30.007 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmUwZDVkNWIwMTAyYmE2ODU5ODQ5MTNmMjU3N2VhMjBjZWJkOTQ5ODgyZWIxZTY0MTI4YzlmNjY3Njg3MjM4MxpwKII=: 00:17:30.007 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:30.007 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:30.007 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:30.007 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmUwZDVkNWIwMTAyYmE2ODU5ODQ5MTNmMjU3N2VhMjBjZWJkOTQ5ODgyZWIxZTY0MTI4YzlmNjY3Njg3MjM4MxpwKII=: 00:17:30.007 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:30.007 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:17:30.007 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:30.007 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:30.007 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:30.007 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:30.007 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:30.007 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:30.007 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.007 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.007 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.007 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:30.007 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:30.007 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:30.007 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:30.007 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:30.007 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:30.007 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:30.007 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:30.007 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:30.007 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:30.007 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:30.007 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:30.007 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.007 11:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
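[Editor's note] The trace above repeats the same host-side sequence for every digest / DH-group / key-id combination: restrict the allowed DH-HMAC-CHAP parameters, attach the controller with the per-key secrets, confirm that nvme0 is reported by bdev_nvme_get_controllers, then detach before the next iteration. A minimal sketch of one such iteration using SPDK's rpc.py is shown below. The ./scripts/rpc.py path and the assumption that the key0/ckey0 keyring entries were registered earlier in the test run are illustrative and not taken from this log; the RPC names and flags are the ones visible in the trace.

    # Minimal sketch of one connect_authenticate iteration as traced above.
    # Assumes an SPDK target listening on 10.0.0.1:4420 and that the DHHC-1
    # secrets were already registered on the host as key0/ckey0 (not shown here).

    # Restrict the host to a single digest and FFDHE group for this pass.
    ./scripts/rpc.py bdev_nvme_set_options \
        --dhchap-digests sha512 \
        --dhchap-dhgroups ffdhe4096

    # Attach with the host key and the controller (bidirectional) key.
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # The iteration passes when the controller name comes back.
    ./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0

    # Tear down before the next digest/dhgroup/keyid combination.
    ./scripts/rpc.py bdev_nvme_detach_controller nvme0

On the target side, the nvmet_auth_set_key calls in the trace echo the matching 'hmac(sha512)' digest name, the FFDHE group, and the DHHC-1 secret to the kernel nvmet target before each attach, so both ends negotiate the same parameters; the exact configfs paths are not shown in this section of the log.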
00:17:30.267 nvme0n1 00:17:30.267 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.267 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:30.267 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:30.267 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.267 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.267 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.267 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.267 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:30.267 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.267 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.267 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.267 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:30.267 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:30.267 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:17:30.267 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:30.267 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:30.267 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:30.267 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:30.267 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjE4MjBkMDQxODFhN2IxYmZhMjZkODk0NWZmZWUxY2VAPr2H: 00:17:30.267 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGY3YTQzYjAyNGVjM2I0NzM2N2RkNjI0M2M5NzZmZDkyODdjMjIwNGNmZjFiZTUzMjljOWE4MzFjNTg0MzIwZvyzlvY=: 00:17:30.267 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:30.267 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:30.267 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjE4MjBkMDQxODFhN2IxYmZhMjZkODk0NWZmZWUxY2VAPr2H: 00:17:30.267 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGY3YTQzYjAyNGVjM2I0NzM2N2RkNjI0M2M5NzZmZDkyODdjMjIwNGNmZjFiZTUzMjljOWE4MzFjNTg0MzIwZvyzlvY=: ]] 00:17:30.267 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGY3YTQzYjAyNGVjM2I0NzM2N2RkNjI0M2M5NzZmZDkyODdjMjIwNGNmZjFiZTUzMjljOWE4MzFjNTg0MzIwZvyzlvY=: 00:17:30.267 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:17:30.267 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:30.267 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:30.267 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:30.267 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:30.267 11:01:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:30.267 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:30.267 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.267 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.267 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.267 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:30.267 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:30.267 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:30.267 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:30.267 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:30.267 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:30.267 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:30.267 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:30.267 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:30.267 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:30.267 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:30.267 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:30.267 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.267 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.861 nvme0n1 00:17:30.861 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.861 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:30.861 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:30.861 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.861 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.861 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.861 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.861 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:30.861 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.861 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.861 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.861 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:30.861 11:01:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:17:30.861 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:30.861 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:30.861 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:30.861 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:30.861 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmZlYjRhZmI0NTFiZTFjOWZlNTRjYzFmZDk2YzI1ZjAzMDA0ODQxM2MxMTU3YTA5YwNPfg==: 00:17:30.861 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmNiNTFkN2M2MTBhYTI1ZjRlZjkxMDVlMDI4MWY5MDE2MjBmYWE4MTYzYWQyZDVlOVTYSQ==: 00:17:30.861 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:30.861 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:30.861 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmZlYjRhZmI0NTFiZTFjOWZlNTRjYzFmZDk2YzI1ZjAzMDA0ODQxM2MxMTU3YTA5YwNPfg==: 00:17:30.861 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmNiNTFkN2M2MTBhYTI1ZjRlZjkxMDVlMDI4MWY5MDE2MjBmYWE4MTYzYWQyZDVlOVTYSQ==: ]] 00:17:30.861 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmNiNTFkN2M2MTBhYTI1ZjRlZjkxMDVlMDI4MWY5MDE2MjBmYWE4MTYzYWQyZDVlOVTYSQ==: 00:17:30.861 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:17:30.861 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:30.861 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:30.861 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:30.861 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:30.861 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:30.861 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:30.861 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.861 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.861 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.861 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:30.861 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:30.861 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:30.861 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:30.861 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:30.861 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:30.861 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:30.861 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:30.861 11:01:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:30.861 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:30.861 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:30.861 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:30.861 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.861 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.121 nvme0n1 00:17:31.121 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.121 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:31.121 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.121 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.121 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:31.121 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.121 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.121 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:31.121 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.121 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.121 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.121 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:31.121 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:17:31.121 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:31.121 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:31.121 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:31.121 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:31.121 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzE2MjBiYWMxMWFiMmIxYTZjZDdiZTkzMTBkZDFhOTbm1UA4: 00:17:31.121 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGI2YzRhYTgxOTU2NzNmMzQxZjM2YzQ3YWUxNjQ3MjJ9brEL: 00:17:31.121 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:31.121 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:31.121 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzE2MjBiYWMxMWFiMmIxYTZjZDdiZTkzMTBkZDFhOTbm1UA4: 00:17:31.121 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGI2YzRhYTgxOTU2NzNmMzQxZjM2YzQ3YWUxNjQ3MjJ9brEL: ]] 00:17:31.121 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGI2YzRhYTgxOTU2NzNmMzQxZjM2YzQ3YWUxNjQ3MjJ9brEL: 00:17:31.121 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:17:31.121 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:31.121 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:31.121 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:31.121 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:31.122 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:31.122 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:31.122 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.122 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.122 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.122 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:31.122 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:31.122 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:31.122 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:31.122 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:31.122 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:31.122 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:31.122 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:31.122 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:31.122 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:31.122 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:31.122 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:31.122 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.122 11:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.381 nvme0n1 00:17:31.381 11:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.381 11:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:31.381 11:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:31.381 11:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.381 11:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.381 11:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.640 11:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.640 11:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:17:31.640 11:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.640 11:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.640 11:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.640 11:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:31.640 11:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:17:31.640 11:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:31.640 11:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:31.640 11:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:31.640 11:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:31.640 11:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTg4MWU5NzA1OTJlMTUzMDVhNDZiMjRjZjAwYTE2ZGRiMTY2OTgxNTVhNTMyNWU0h6u0CA==: 00:17:31.640 11:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTdkMjJkMTRjOGNiY2Y1N2JmZWJkYTBiYjcyNjkzNjSmxfQN: 00:17:31.640 11:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:31.640 11:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:31.640 11:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTg4MWU5NzA1OTJlMTUzMDVhNDZiMjRjZjAwYTE2ZGRiMTY2OTgxNTVhNTMyNWU0h6u0CA==: 00:17:31.640 11:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTdkMjJkMTRjOGNiY2Y1N2JmZWJkYTBiYjcyNjkzNjSmxfQN: ]] 00:17:31.640 11:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTdkMjJkMTRjOGNiY2Y1N2JmZWJkYTBiYjcyNjkzNjSmxfQN: 00:17:31.640 11:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:17:31.640 11:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:31.640 11:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:31.640 11:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:31.640 11:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:31.640 11:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:31.640 11:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:31.640 11:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.640 11:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.640 11:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.640 11:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:31.640 11:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:31.640 11:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:31.640 11:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:31.640 11:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:31.640 11:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:31.640 11:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:31.640 11:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:31.640 11:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:31.640 11:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:31.640 11:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:31.640 11:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:31.640 11:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.640 11:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.900 nvme0n1 00:17:31.900 11:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.900 11:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:31.900 11:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:31.900 11:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.900 11:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.900 11:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.900 11:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.900 11:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:31.900 11:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.900 11:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.900 11:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.900 11:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:31.900 11:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:17:31.900 11:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:31.900 11:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:31.900 11:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:31.900 11:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:31.900 11:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmUwZDVkNWIwMTAyYmE2ODU5ODQ5MTNmMjU3N2VhMjBjZWJkOTQ5ODgyZWIxZTY0MTI4YzlmNjY3Njg3MjM4MxpwKII=: 00:17:31.900 11:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:31.900 11:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:31.900 11:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:31.900 11:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZmUwZDVkNWIwMTAyYmE2ODU5ODQ5MTNmMjU3N2VhMjBjZWJkOTQ5ODgyZWIxZTY0MTI4YzlmNjY3Njg3MjM4MxpwKII=: 00:17:31.900 11:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:31.900 11:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:17:31.900 11:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:31.900 11:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:31.900 11:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:31.900 11:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:31.900 11:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:31.900 11:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:31.900 11:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.900 11:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.900 11:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.900 11:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:31.900 11:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:31.900 11:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:31.900 11:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:31.900 11:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:31.900 11:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:31.900 11:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:31.900 11:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:31.900 11:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:31.900 11:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:31.900 11:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:31.900 11:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:31.900 11:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.900 11:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.159 nvme0n1 00:17:32.159 11:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.159 11:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:32.159 11:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:32.159 11:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.159 11:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.160 11:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.419 11:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.419 11:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:32.419 11:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.419 11:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.419 11:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.419 11:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:32.419 11:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:32.419 11:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:17:32.419 11:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:32.419 11:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:32.419 11:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:32.419 11:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:32.419 11:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjE4MjBkMDQxODFhN2IxYmZhMjZkODk0NWZmZWUxY2VAPr2H: 00:17:32.419 11:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGY3YTQzYjAyNGVjM2I0NzM2N2RkNjI0M2M5NzZmZDkyODdjMjIwNGNmZjFiZTUzMjljOWE4MzFjNTg0MzIwZvyzlvY=: 00:17:32.419 11:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:32.419 11:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:32.419 11:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjE4MjBkMDQxODFhN2IxYmZhMjZkODk0NWZmZWUxY2VAPr2H: 00:17:32.419 11:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGY3YTQzYjAyNGVjM2I0NzM2N2RkNjI0M2M5NzZmZDkyODdjMjIwNGNmZjFiZTUzMjljOWE4MzFjNTg0MzIwZvyzlvY=: ]] 00:17:32.419 11:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGY3YTQzYjAyNGVjM2I0NzM2N2RkNjI0M2M5NzZmZDkyODdjMjIwNGNmZjFiZTUzMjljOWE4MzFjNTg0MzIwZvyzlvY=: 00:17:32.419 11:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:17:32.419 11:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:32.419 11:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:32.419 11:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:32.419 11:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:32.419 11:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:32.419 11:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:32.419 11:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.419 11:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.419 11:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.419 11:01:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:32.419 11:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:32.419 11:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:32.419 11:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:32.419 11:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:32.419 11:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:32.419 11:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:32.419 11:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:32.419 11:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:32.419 11:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:32.419 11:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:32.419 11:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:32.419 11:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.419 11:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.988 nvme0n1 00:17:32.988 11:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.988 11:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:32.988 11:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:32.988 11:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.988 11:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.988 11:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.988 11:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.988 11:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:32.988 11:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.988 11:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.988 11:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.988 11:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:32.988 11:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:17:32.988 11:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:32.988 11:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:32.988 11:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:32.988 11:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:32.988 11:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NmZlYjRhZmI0NTFiZTFjOWZlNTRjYzFmZDk2YzI1ZjAzMDA0ODQxM2MxMTU3YTA5YwNPfg==: 00:17:32.988 11:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmNiNTFkN2M2MTBhYTI1ZjRlZjkxMDVlMDI4MWY5MDE2MjBmYWE4MTYzYWQyZDVlOVTYSQ==: 00:17:32.988 11:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:32.988 11:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:32.988 11:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmZlYjRhZmI0NTFiZTFjOWZlNTRjYzFmZDk2YzI1ZjAzMDA0ODQxM2MxMTU3YTA5YwNPfg==: 00:17:32.988 11:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmNiNTFkN2M2MTBhYTI1ZjRlZjkxMDVlMDI4MWY5MDE2MjBmYWE4MTYzYWQyZDVlOVTYSQ==: ]] 00:17:32.988 11:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmNiNTFkN2M2MTBhYTI1ZjRlZjkxMDVlMDI4MWY5MDE2MjBmYWE4MTYzYWQyZDVlOVTYSQ==: 00:17:32.988 11:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:17:32.988 11:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:32.988 11:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:32.989 11:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:32.989 11:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:32.989 11:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:32.989 11:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:32.989 11:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.989 11:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.989 11:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.989 11:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:32.989 11:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:32.989 11:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:32.989 11:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:32.989 11:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:32.989 11:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:32.989 11:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:32.989 11:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:32.989 11:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:32.989 11:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:32.989 11:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:32.989 11:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:32.989 11:01:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.989 11:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.556 nvme0n1 00:17:33.556 11:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.556 11:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:33.556 11:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:33.556 11:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.556 11:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.556 11:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.556 11:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.556 11:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:33.556 11:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.556 11:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.556 11:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.556 11:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:33.556 11:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:17:33.556 11:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:33.556 11:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:33.556 11:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:33.556 11:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:33.556 11:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzE2MjBiYWMxMWFiMmIxYTZjZDdiZTkzMTBkZDFhOTbm1UA4: 00:17:33.556 11:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGI2YzRhYTgxOTU2NzNmMzQxZjM2YzQ3YWUxNjQ3MjJ9brEL: 00:17:33.556 11:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:33.556 11:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:33.556 11:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzE2MjBiYWMxMWFiMmIxYTZjZDdiZTkzMTBkZDFhOTbm1UA4: 00:17:33.556 11:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGI2YzRhYTgxOTU2NzNmMzQxZjM2YzQ3YWUxNjQ3MjJ9brEL: ]] 00:17:33.556 11:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGI2YzRhYTgxOTU2NzNmMzQxZjM2YzQ3YWUxNjQ3MjJ9brEL: 00:17:33.556 11:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:17:33.556 11:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:33.556 11:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:33.556 11:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:33.557 11:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:33.557 11:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:33.557 11:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:33.557 11:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.557 11:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.557 11:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.557 11:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:33.557 11:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:33.557 11:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:33.557 11:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:33.557 11:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:33.557 11:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:33.557 11:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:33.557 11:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:33.557 11:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:33.557 11:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:33.557 11:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:33.557 11:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:33.557 11:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.557 11:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.125 nvme0n1 00:17:34.125 11:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.125 11:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:34.125 11:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:34.125 11:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.125 11:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.125 11:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.125 11:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.125 11:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:34.125 11:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.125 11:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.125 11:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.125 11:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:34.125 11:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:17:34.125 11:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:34.125 11:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:34.125 11:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:34.125 11:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:34.125 11:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTg4MWU5NzA1OTJlMTUzMDVhNDZiMjRjZjAwYTE2ZGRiMTY2OTgxNTVhNTMyNWU0h6u0CA==: 00:17:34.125 11:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTdkMjJkMTRjOGNiY2Y1N2JmZWJkYTBiYjcyNjkzNjSmxfQN: 00:17:34.125 11:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:34.125 11:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:34.125 11:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTg4MWU5NzA1OTJlMTUzMDVhNDZiMjRjZjAwYTE2ZGRiMTY2OTgxNTVhNTMyNWU0h6u0CA==: 00:17:34.125 11:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTdkMjJkMTRjOGNiY2Y1N2JmZWJkYTBiYjcyNjkzNjSmxfQN: ]] 00:17:34.125 11:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTdkMjJkMTRjOGNiY2Y1N2JmZWJkYTBiYjcyNjkzNjSmxfQN: 00:17:34.125 11:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:17:34.125 11:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:34.125 11:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:34.125 11:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:34.125 11:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:34.125 11:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:34.125 11:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:34.125 11:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.125 11:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.125 11:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.125 11:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:34.125 11:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:34.125 11:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:34.125 11:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:34.125 11:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:34.125 11:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:34.125 11:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:34.125 11:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:34.125 11:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:34.125 11:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:34.125 11:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:34.125 11:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:34.125 11:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.125 11:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.693 nvme0n1 00:17:34.693 11:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.693 11:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:34.693 11:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:34.693 11:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.693 11:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.693 11:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.693 11:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.693 11:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:34.693 11:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.693 11:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.693 11:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.693 11:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:34.693 11:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:17:34.693 11:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:34.693 11:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:34.693 11:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:34.693 11:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:34.693 11:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmUwZDVkNWIwMTAyYmE2ODU5ODQ5MTNmMjU3N2VhMjBjZWJkOTQ5ODgyZWIxZTY0MTI4YzlmNjY3Njg3MjM4MxpwKII=: 00:17:34.693 11:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:34.693 11:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:34.693 11:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:34.693 11:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmUwZDVkNWIwMTAyYmE2ODU5ODQ5MTNmMjU3N2VhMjBjZWJkOTQ5ODgyZWIxZTY0MTI4YzlmNjY3Njg3MjM4MxpwKII=: 00:17:34.693 11:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:34.693 11:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:17:34.693 11:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:34.693 11:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:34.693 11:01:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:34.693 11:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:34.693 11:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:34.693 11:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:34.693 11:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.693 11:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.693 11:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.693 11:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:34.693 11:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:34.693 11:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:34.693 11:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:34.693 11:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:34.693 11:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:34.693 11:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:34.693 11:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:34.693 11:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:34.693 11:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:34.694 11:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:34.694 11:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:34.694 11:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.694 11:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.262 nvme0n1 00:17:35.262 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.262 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:35.262 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.262 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:35.262 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.262 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.262 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.262 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:35.262 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.262 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.262 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.262 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:17:35.262 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:35.262 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:35.262 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:35.262 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:35.262 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmZlYjRhZmI0NTFiZTFjOWZlNTRjYzFmZDk2YzI1ZjAzMDA0ODQxM2MxMTU3YTA5YwNPfg==: 00:17:35.262 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmNiNTFkN2M2MTBhYTI1ZjRlZjkxMDVlMDI4MWY5MDE2MjBmYWE4MTYzYWQyZDVlOVTYSQ==: 00:17:35.262 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:35.262 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:35.262 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmZlYjRhZmI0NTFiZTFjOWZlNTRjYzFmZDk2YzI1ZjAzMDA0ODQxM2MxMTU3YTA5YwNPfg==: 00:17:35.262 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmNiNTFkN2M2MTBhYTI1ZjRlZjkxMDVlMDI4MWY5MDE2MjBmYWE4MTYzYWQyZDVlOVTYSQ==: ]] 00:17:35.262 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmNiNTFkN2M2MTBhYTI1ZjRlZjkxMDVlMDI4MWY5MDE2MjBmYWE4MTYzYWQyZDVlOVTYSQ==: 00:17:35.262 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:35.262 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.262 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.262 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.262 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:17:35.262 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:35.262 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:35.262 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:35.262 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:35.262 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:35.262 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:35.262 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:35.262 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:35.262 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:35.262 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:35.262 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:17:35.262 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # 
local es=0 00:17:35.262 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:17:35.262 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:35.262 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:35.262 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:35.262 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:35.262 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:17:35.262 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.262 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.262 request: 00:17:35.262 { 00:17:35.262 "name": "nvme0", 00:17:35.262 "trtype": "tcp", 00:17:35.262 "traddr": "10.0.0.1", 00:17:35.262 "adrfam": "ipv4", 00:17:35.262 "trsvcid": "4420", 00:17:35.262 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:17:35.262 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:17:35.262 "prchk_reftag": false, 00:17:35.262 "prchk_guard": false, 00:17:35.262 "hdgst": false, 00:17:35.262 "ddgst": false, 00:17:35.262 "allow_unrecognized_csi": false, 00:17:35.262 "method": "bdev_nvme_attach_controller", 00:17:35.262 "req_id": 1 00:17:35.262 } 00:17:35.262 Got JSON-RPC error response 00:17:35.262 response: 00:17:35.262 { 00:17:35.262 "code": -5, 00:17:35.262 "message": "Input/output error" 00:17:35.262 } 00:17:35.262 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:35.262 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:17:35.262 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:35.262 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:35.262 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:35.521 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:17:35.521 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:17:35.521 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.521 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.521 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.521 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:17:35.521 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:17:35.521 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:35.521 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:35.521 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:35.521 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:35.521 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:35.521 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:35.521 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:35.521 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:35.521 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:35.521 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:35.521 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:35.521 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:17:35.521 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:35.521 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:35.521 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:35.521 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:35.522 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:35.522 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:35.522 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.522 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.522 request: 00:17:35.522 { 00:17:35.522 "name": "nvme0", 00:17:35.522 "trtype": "tcp", 00:17:35.522 "traddr": "10.0.0.1", 00:17:35.522 "adrfam": "ipv4", 00:17:35.522 "trsvcid": "4420", 00:17:35.522 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:17:35.522 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:17:35.522 "prchk_reftag": false, 00:17:35.522 "prchk_guard": false, 00:17:35.522 "hdgst": false, 00:17:35.522 "ddgst": false, 00:17:35.522 "dhchap_key": "key2", 00:17:35.522 "allow_unrecognized_csi": false, 00:17:35.522 "method": "bdev_nvme_attach_controller", 00:17:35.522 "req_id": 1 00:17:35.522 } 00:17:35.522 Got JSON-RPC error response 00:17:35.522 response: 00:17:35.522 { 00:17:35.522 "code": -5, 00:17:35.522 "message": "Input/output error" 00:17:35.522 } 00:17:35.522 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:35.522 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:17:35.522 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:35.522 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:35.522 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:35.522 11:01:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:17:35.522 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.522 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:17:35.522 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.522 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.522 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:17:35.522 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:17:35.522 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:35.522 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:35.522 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:35.522 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:35.522 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:35.522 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:35.522 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:35.522 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:35.522 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:35.522 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:35.522 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:35.522 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:17:35.522 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:35.522 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:35.522 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:35.522 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:35.522 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:35.522 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:35.522 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.522 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.522 request: 00:17:35.522 { 00:17:35.522 "name": "nvme0", 00:17:35.522 "trtype": "tcp", 00:17:35.522 "traddr": "10.0.0.1", 00:17:35.522 "adrfam": "ipv4", 00:17:35.522 "trsvcid": "4420", 
00:17:35.522 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:17:35.522 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:17:35.522 "prchk_reftag": false, 00:17:35.522 "prchk_guard": false, 00:17:35.522 "hdgst": false, 00:17:35.522 "ddgst": false, 00:17:35.522 "dhchap_key": "key1", 00:17:35.522 "dhchap_ctrlr_key": "ckey2", 00:17:35.522 "allow_unrecognized_csi": false, 00:17:35.522 "method": "bdev_nvme_attach_controller", 00:17:35.522 "req_id": 1 00:17:35.522 } 00:17:35.522 Got JSON-RPC error response 00:17:35.522 response: 00:17:35.522 { 00:17:35.522 "code": -5, 00:17:35.522 "message": "Input/output error" 00:17:35.522 } 00:17:35.522 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:35.522 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:17:35.522 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:35.522 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:35.522 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:35.522 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:17:35.522 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:35.522 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:35.522 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:35.522 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:35.522 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:35.522 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:35.522 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:35.522 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:35.522 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:35.522 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:35.522 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:35.522 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.522 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.782 nvme0n1 00:17:35.782 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.782 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:17:35.782 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:35.782 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:35.782 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:35.782 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:35.782 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:NzE2MjBiYWMxMWFiMmIxYTZjZDdiZTkzMTBkZDFhOTbm1UA4: 00:17:35.782 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGI2YzRhYTgxOTU2NzNmMzQxZjM2YzQ3YWUxNjQ3MjJ9brEL: 00:17:35.782 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:35.782 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:35.782 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzE2MjBiYWMxMWFiMmIxYTZjZDdiZTkzMTBkZDFhOTbm1UA4: 00:17:35.782 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGI2YzRhYTgxOTU2NzNmMzQxZjM2YzQ3YWUxNjQ3MjJ9brEL: ]] 00:17:35.782 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGI2YzRhYTgxOTU2NzNmMzQxZjM2YzQ3YWUxNjQ3MjJ9brEL: 00:17:35.782 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:35.782 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.782 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.782 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.782 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:17:35.782 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:17:35.782 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.782 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.782 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.782 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.782 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:35.782 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:17:35.782 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:35.782 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:35.782 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:35.782 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:35.782 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:35.782 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:35.782 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.782 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.782 request: 00:17:35.782 { 00:17:35.782 "name": "nvme0", 00:17:35.782 "dhchap_key": "key1", 00:17:35.782 "dhchap_ctrlr_key": "ckey2", 00:17:35.782 "method": "bdev_nvme_set_keys", 00:17:35.782 "req_id": 1 00:17:35.782 } 00:17:35.782 Got JSON-RPC error response 00:17:35.782 response: 00:17:35.782 
{ 00:17:35.782 "code": -13, 00:17:35.782 "message": "Permission denied" 00:17:35.782 } 00:17:35.782 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:35.782 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:17:35.782 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:35.782 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:35.782 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:35.782 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:17:35.782 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.782 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:17:35.782 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.782 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.782 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:17:35.782 11:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:17:37.160 11:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:17:37.160 11:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:17:37.160 11:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.160 11:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.160 11:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.160 11:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:17:37.160 11:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:17:37.160 11:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:37.160 11:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:37.160 11:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:37.160 11:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:37.160 11:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmZlYjRhZmI0NTFiZTFjOWZlNTRjYzFmZDk2YzI1ZjAzMDA0ODQxM2MxMTU3YTA5YwNPfg==: 00:17:37.160 11:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmNiNTFkN2M2MTBhYTI1ZjRlZjkxMDVlMDI4MWY5MDE2MjBmYWE4MTYzYWQyZDVlOVTYSQ==: 00:17:37.160 11:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:37.160 11:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:37.160 11:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmZlYjRhZmI0NTFiZTFjOWZlNTRjYzFmZDk2YzI1ZjAzMDA0ODQxM2MxMTU3YTA5YwNPfg==: 00:17:37.160 11:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmNiNTFkN2M2MTBhYTI1ZjRlZjkxMDVlMDI4MWY5MDE2MjBmYWE4MTYzYWQyZDVlOVTYSQ==: ]] 00:17:37.160 11:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmNiNTFkN2M2MTBhYTI1ZjRlZjkxMDVlMDI4MWY5MDE2MjBmYWE4MTYzYWQyZDVlOVTYSQ==: 00:17:37.160 11:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@142 -- # get_main_ns_ip 00:17:37.160 11:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:37.160 11:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:37.160 11:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:37.160 11:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:37.160 11:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:37.160 11:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:37.160 11:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:37.160 11:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:37.160 11:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:37.160 11:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:37.160 11:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:37.160 11:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.160 11:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.160 nvme0n1 00:17:37.160 11:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.160 11:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:17:37.160 11:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:37.160 11:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:37.160 11:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:37.160 11:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:37.160 11:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzE2MjBiYWMxMWFiMmIxYTZjZDdiZTkzMTBkZDFhOTbm1UA4: 00:17:37.160 11:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGI2YzRhYTgxOTU2NzNmMzQxZjM2YzQ3YWUxNjQ3MjJ9brEL: 00:17:37.160 11:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:37.160 11:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:37.160 11:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzE2MjBiYWMxMWFiMmIxYTZjZDdiZTkzMTBkZDFhOTbm1UA4: 00:17:37.160 11:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGI2YzRhYTgxOTU2NzNmMzQxZjM2YzQ3YWUxNjQ3MjJ9brEL: ]] 00:17:37.160 11:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGI2YzRhYTgxOTU2NzNmMzQxZjM2YzQ3YWUxNjQ3MjJ9brEL: 00:17:37.160 11:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:17:37.160 11:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:17:37.160 11:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:17:37.160 11:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:37.160 11:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:37.160 11:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:37.160 11:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:37.160 11:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:17:37.160 11:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.160 11:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.160 request: 00:17:37.160 { 00:17:37.160 "name": "nvme0", 00:17:37.160 "dhchap_key": "key2", 00:17:37.160 "dhchap_ctrlr_key": "ckey1", 00:17:37.160 "method": "bdev_nvme_set_keys", 00:17:37.160 "req_id": 1 00:17:37.160 } 00:17:37.160 Got JSON-RPC error response 00:17:37.160 response: 00:17:37.160 { 00:17:37.160 "code": -13, 00:17:37.160 "message": "Permission denied" 00:17:37.160 } 00:17:37.160 11:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:37.160 11:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:17:37.160 11:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:37.160 11:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:37.160 11:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:37.160 11:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:17:37.160 11:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.160 11:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:17:37.160 11:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.160 11:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.160 11:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:17:37.160 11:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:17:38.096 11:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:17:38.096 11:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:17:38.096 11:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.096 11:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.096 11:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.096 11:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:17:38.096 11:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:17:38.096 11:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:17:38.096 11:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:17:38.096 11:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:17:38.096 11:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:17:38.096 11:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:38.096 11:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:17:38.096 11:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:38.096 11:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:38.096 rmmod nvme_tcp 00:17:38.096 rmmod nvme_fabrics 00:17:38.096 11:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:38.096 11:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:17:38.096 11:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:17:38.096 11:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 78088 ']' 00:17:38.096 11:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 78088 00:17:38.096 11:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 78088 ']' 00:17:38.096 11:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 78088 00:17:38.096 11:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:17:38.096 11:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:38.096 11:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78088 00:17:38.355 killing process with pid 78088 00:17:38.355 11:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:38.355 11:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:38.356 11:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78088' 00:17:38.356 11:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 78088 00:17:38.356 11:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 78088 00:17:38.356 11:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:38.356 11:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:38.356 11:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:38.356 11:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:17:38.356 11:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:17:38.356 11:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:38.356 11:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:17:38.356 11:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:38.356 11:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:38.356 11:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:38.356 11:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:38.356 11:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:38.356 11:01:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:38.615 11:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:38.615 11:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:38.615 11:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:38.615 11:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:38.615 11:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:38.615 11:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:38.615 11:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:38.615 11:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:38.615 11:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:38.615 11:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:38.615 11:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:38.615 11:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:38.615 11:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:38.615 11:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@300 -- # return 0 00:17:38.615 11:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:17:38.615 11:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:17:38.615 11:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:17:38.615 11:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:17:38.615 11:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:17:38.615 11:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:38.615 11:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:17:38.615 11:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:17:38.615 11:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:38.615 11:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:17:38.615 11:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:17:38.615 11:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:39.552 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:39.552 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 
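For readability, the kernel nvmet teardown traced above (host/auth.sh cleanup calling clean_kernel_target in nvmf/common.sh) condenses to the sketch below. It lists only commands that appear in the trace, with paths as logged; the variables are introduced here for brevity, and the trace's bare `echo 0` has no visible redirection target, so its destination is assumed.

  # Sketch reconstructed from the traced cleanup; run as root on the test VM.
  nqn=nqn.2024-02.io.spdk:cnode0
  cfg=/sys/kernel/config/nvmet

  rm    $cfg/subsystems/$nqn/allowed_hosts/nqn.2024-02.io.spdk:host0
  rmdir $cfg/hosts/nqn.2024-02.io.spdk:host0
  # echo 0 > ...        (target attribute disabled before removal; path not shown in trace)
  rm -f $cfg/ports/1/subsystems/$nqn        # unlink subsystem from port 1
  rmdir $cfg/subsystems/$nqn/namespaces/1   # drop namespace 1
  rmdir $cfg/ports/1                        # drop the port
  rmdir $cfg/subsystems/$nqn                # drop the subsystem
  modprobe -r nvmet_tcp nvmet               # unload kernel target modules
  /home/vagrant/spdk_repo/spdk/scripts/setup.sh   # rebind NVMe devices (uio_pci_generic)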
00:17:39.552 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:17:39.552 11:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.k7i /tmp/spdk.key-null.5Zl /tmp/spdk.key-sha256.6an /tmp/spdk.key-sha384.dty /tmp/spdk.key-sha512.2qk /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:17:39.552 11:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:39.812 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:39.812 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:39.812 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:39.812 00:17:39.812 real 0m35.344s 00:17:39.812 user 0m32.401s 00:17:39.812 sys 0m3.844s 00:17:39.812 11:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:39.812 11:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.812 ************************************ 00:17:39.812 END TEST nvmf_auth_host 00:17:39.812 ************************************ 00:17:40.072 11:01:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:17:40.072 11:01:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:17:40.072 11:01:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:40.072 11:01:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:40.072 11:01:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.072 ************************************ 00:17:40.072 START TEST nvmf_digest 00:17:40.072 ************************************ 00:17:40.072 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:17:40.072 * Looking for test storage... 
00:17:40.072 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:40.072 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:40.072 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:17:40.072 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:40.072 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:40.072 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:40.073 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:40.073 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:40.073 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:17:40.073 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:17:40.073 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:17:40.073 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:17:40.073 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:17:40.073 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:17:40.073 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:17:40.073 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:40.073 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:17:40.073 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:17:40.073 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:40.073 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:40.073 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:17:40.073 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:17:40.073 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:40.073 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:17:40.073 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:17:40.073 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:17:40.073 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:17:40.073 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:40.073 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:17:40.073 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:17:40.073 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:40.073 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:40.073 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:17:40.073 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:40.073 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:40.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:40.073 --rc genhtml_branch_coverage=1 00:17:40.073 --rc genhtml_function_coverage=1 00:17:40.073 --rc genhtml_legend=1 00:17:40.073 --rc geninfo_all_blocks=1 00:17:40.073 --rc geninfo_unexecuted_blocks=1 00:17:40.073 00:17:40.073 ' 00:17:40.073 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:40.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:40.073 --rc genhtml_branch_coverage=1 00:17:40.073 --rc genhtml_function_coverage=1 00:17:40.073 --rc genhtml_legend=1 00:17:40.073 --rc geninfo_all_blocks=1 00:17:40.073 --rc geninfo_unexecuted_blocks=1 00:17:40.073 00:17:40.073 ' 00:17:40.073 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:40.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:40.073 --rc genhtml_branch_coverage=1 00:17:40.073 --rc genhtml_function_coverage=1 00:17:40.073 --rc genhtml_legend=1 00:17:40.073 --rc geninfo_all_blocks=1 00:17:40.073 --rc geninfo_unexecuted_blocks=1 00:17:40.073 00:17:40.073 ' 00:17:40.073 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:40.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:40.073 --rc genhtml_branch_coverage=1 00:17:40.073 --rc genhtml_function_coverage=1 00:17:40.073 --rc genhtml_legend=1 00:17:40.073 --rc geninfo_all_blocks=1 00:17:40.073 --rc geninfo_unexecuted_blocks=1 00:17:40.073 00:17:40.073 ' 00:17:40.073 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:40.073 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:17:40.073 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:40.073 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:40.073 11:01:26 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:40.073 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:40.073 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:40.073 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:40.073 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:40.073 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:40.073 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:40.073 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:40.073 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:17:40.073 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:17:40.073 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:40.073 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:40.073 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:40.073 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:40.073 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:40.073 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:17:40.073 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:40.073 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:40.073 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:40.073 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.073 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.073 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.073 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:17:40.073 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.073 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:17:40.073 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:40.073 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:40.073 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:40.073 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:40.073 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:40.073 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:40.073 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:40.073 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:40.074 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:40.074 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:40.074 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:17:40.074 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:17:40.074 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:17:40.074 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:17:40.074 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:17:40.074 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:40.074 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:40.074 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:40.074 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:40.074 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:40.074 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:40.074 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:40.074 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:40.074 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:40.074 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:40.074 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:40.074 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:40.074 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:40.074 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:40.074 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:40.074 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:40.074 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:40.074 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:40.074 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:40.074 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:40.074 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:40.074 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:40.074 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:40.074 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:40.074 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:40.074 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:40.074 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:40.074 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:40.074 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:40.074 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:40.074 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:40.333 Cannot find device "nvmf_init_br" 00:17:40.333 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # true 00:17:40.333 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:40.333 Cannot find device "nvmf_init_br2" 00:17:40.333 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # true 00:17:40.333 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:40.333 Cannot find device "nvmf_tgt_br" 00:17:40.333 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # true 00:17:40.333 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # ip link 
set nvmf_tgt_br2 nomaster 00:17:40.333 Cannot find device "nvmf_tgt_br2" 00:17:40.333 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # true 00:17:40.333 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:40.333 Cannot find device "nvmf_init_br" 00:17:40.333 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # true 00:17:40.333 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:40.333 Cannot find device "nvmf_init_br2" 00:17:40.333 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # true 00:17:40.333 11:01:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:40.333 Cannot find device "nvmf_tgt_br" 00:17:40.333 11:01:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # true 00:17:40.333 11:01:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:40.333 Cannot find device "nvmf_tgt_br2" 00:17:40.333 11:01:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # true 00:17:40.333 11:01:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:40.333 Cannot find device "nvmf_br" 00:17:40.333 11:01:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # true 00:17:40.333 11:01:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:40.333 Cannot find device "nvmf_init_if" 00:17:40.333 11:01:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # true 00:17:40.333 11:01:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:40.333 Cannot find device "nvmf_init_if2" 00:17:40.333 11:01:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # true 00:17:40.333 11:01:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:40.333 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:40.333 11:01:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # true 00:17:40.333 11:01:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:40.333 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:40.333 11:01:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # true 00:17:40.333 11:01:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:40.333 11:01:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:40.333 11:01:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:40.333 11:01:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:40.333 11:01:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:40.333 11:01:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:40.333 11:01:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:40.333 11:01:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:40.333 11:01:27 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:40.333 11:01:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:40.593 11:01:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:40.593 11:01:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:40.593 11:01:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:40.593 11:01:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:40.593 11:01:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:40.593 11:01:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:40.593 11:01:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:40.593 11:01:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:40.593 11:01:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:40.593 11:01:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:40.593 11:01:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:40.593 11:01:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:40.593 11:01:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:40.593 11:01:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:40.593 11:01:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:40.593 11:01:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:40.593 11:01:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:40.593 11:01:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:40.593 11:01:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:40.593 11:01:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:40.593 11:01:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:40.593 11:01:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:40.593 11:01:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:40.593 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:17:40.593 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:17:40.593 00:17:40.593 --- 10.0.0.3 ping statistics --- 00:17:40.593 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:40.593 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:17:40.593 11:01:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:40.593 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:40.593 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.059 ms 00:17:40.593 00:17:40.593 --- 10.0.0.4 ping statistics --- 00:17:40.593 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:40.593 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:17:40.593 11:01:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:40.593 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:40.593 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:17:40.593 00:17:40.593 --- 10.0.0.1 ping statistics --- 00:17:40.593 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:40.593 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:17:40.593 11:01:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:40.593 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:40.593 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:17:40.593 00:17:40.593 --- 10.0.0.2 ping statistics --- 00:17:40.593 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:40.593 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:17:40.593 11:01:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:40.593 11:01:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@461 -- # return 0 00:17:40.593 11:01:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:40.593 11:01:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:40.593 11:01:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:40.593 11:01:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:40.593 11:01:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:40.593 11:01:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:40.593 11:01:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:40.593 11:01:27 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:40.593 11:01:27 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:17:40.593 11:01:27 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:17:40.593 11:01:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:40.594 11:01:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:40.594 11:01:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:17:40.594 ************************************ 00:17:40.594 START TEST nvmf_digest_clean 00:17:40.594 ************************************ 00:17:40.594 11:01:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:17:40.594 11:01:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 
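The nvmf_veth_init sequence traced above builds the virtual test topology and then verifies it with the four pings. Condensed into a sketch, using only interface names, addresses and rules that appear in the trace (run as root; the ipts helper tags each rule with an SPDK_NVMF comment so the earlier `iptables-save | grep -v SPDK_NVMF | iptables-restore` cleanup can strip them again, the comment text is elided here):

  NS=nvmf_tgt_ns_spdk
  ip netns add $NS

  # Two initiator veth pairs and two target veth pairs
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

  # Target-side interfaces live inside the namespace
  ip link set nvmf_tgt_if  netns $NS
  ip link set nvmf_tgt_if2 netns $NS

  # Addressing: initiators 10.0.0.1/.2, target 10.0.0.3/.4
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec $NS ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec $NS ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

  # Bring everything up, including loopback inside the namespace
  for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" up
  done
  ip netns exec $NS ip link set nvmf_tgt_if up
  ip netns exec $NS ip link set nvmf_tgt_if2 up
  ip netns exec $NS ip link set lo up

  # Bridge ties the host-side peer ends together
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" master nvmf_br
  done

  # Allow NVMe/TCP (port 4420) in and forwarding across the bridge
  iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:...'
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:...'
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:...'

  # Connectivity check (host -> target addresses, namespace -> initiator addresses)
  ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4
  ip netns exec $NS ping -c 1 10.0.0.1 && ip netns exec $NS ping -c 1 10.0.0.2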
00:17:40.594 11:01:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:17:40.594 11:01:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:17:40.594 11:01:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:17:40.594 11:01:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:17:40.594 11:01:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:40.594 11:01:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:40.594 11:01:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:40.594 11:01:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=79707 00:17:40.594 11:01:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:17:40.594 11:01:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 79707 00:17:40.594 11:01:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 79707 ']' 00:17:40.594 11:01:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:40.594 11:01:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:40.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:40.594 11:01:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:40.594 11:01:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:40.594 11:01:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:40.594 [2024-11-15 11:01:27.422238] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:17:40.594 [2024-11-15 11:01:27.422339] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:40.853 [2024-11-15 11:01:27.572165] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:40.853 [2024-11-15 11:01:27.628592] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:40.853 [2024-11-15 11:01:27.628652] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:40.853 [2024-11-15 11:01:27.628667] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:40.853 [2024-11-15 11:01:27.628678] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:40.853 [2024-11-15 11:01:27.628687] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
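nvmfappstart, traced above, launches the target application inside the namespace with --wait-for-rpc (so the framework pauses until framework_start_init arrives over RPC) and records its pid for the later killprocess. A minimal sketch of the same start-up follows; the readiness wait is written here as a plain socket poll because the actual waitforlisten helper in autotest_common.sh is not shown in this trace and may differ.

  # Assumed sketch of the traced start-up; command line and socket path as logged.
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
  nvmfpid=$!

  # Wait for the app to create its JSON-RPC socket before configuring it
  until [ -S /var/tmp/spdk.sock ]; do
      kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
      sleep 0.1
  done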
00:17:40.853 [2024-11-15 11:01:27.629133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:41.792 11:01:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:41.792 11:01:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:17:41.792 11:01:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:41.792 11:01:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:41.792 11:01:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:41.792 11:01:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:41.792 11:01:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:17:41.792 11:01:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:17:41.792 11:01:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:17:41.792 11:01:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.792 11:01:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:41.792 [2024-11-15 11:01:28.500663] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:41.792 null0 00:17:41.792 [2024-11-15 11:01:28.551234] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:41.792 [2024-11-15 11:01:28.575314] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:41.792 11:01:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.792 11:01:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:17:41.792 11:01:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:17:41.792 11:01:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:41.792 11:01:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:17:41.792 11:01:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:17:41.792 11:01:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:17:41.792 11:01:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:17:41.792 11:01:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=79745 00:17:41.792 11:01:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:17:41.792 11:01:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 79745 /var/tmp/bperf.sock 00:17:41.792 11:01:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 79745 ']' 00:17:41.792 11:01:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:17:41.792 11:01:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:41.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:41.792 11:01:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:41.792 11:01:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:41.792 11:01:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:41.792 [2024-11-15 11:01:28.636398] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:17:41.792 [2024-11-15 11:01:28.636503] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79745 ] 00:17:42.052 [2024-11-15 11:01:28.791252] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:42.052 [2024-11-15 11:01:28.855037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:42.052 11:01:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:42.052 11:01:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:17:42.052 11:01:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:17:42.052 11:01:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:17:42.052 11:01:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:42.620 [2024-11-15 11:01:29.227754] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:42.620 11:01:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:42.620 11:01:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:42.880 nvme0n1 00:17:42.880 11:01:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:17:42.880 11:01:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:43.139 Running I/O for 2 seconds... 
00:17:45.085 17018.00 IOPS, 66.48 MiB/s [2024-11-15T11:01:31.946Z] 17145.00 IOPS, 66.97 MiB/s 00:17:45.085 Latency(us) 00:17:45.085 [2024-11-15T11:01:31.946Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:45.085 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:17:45.085 nvme0n1 : 2.01 17158.87 67.03 0.00 0.00 7454.74 6613.18 17396.83 00:17:45.085 [2024-11-15T11:01:31.946Z] =================================================================================================================== 00:17:45.085 [2024-11-15T11:01:31.946Z] Total : 17158.87 67.03 0.00 0.00 7454.74 6613.18 17396.83 00:17:45.085 { 00:17:45.085 "results": [ 00:17:45.085 { 00:17:45.085 "job": "nvme0n1", 00:17:45.085 "core_mask": "0x2", 00:17:45.085 "workload": "randread", 00:17:45.085 "status": "finished", 00:17:45.085 "queue_depth": 128, 00:17:45.085 "io_size": 4096, 00:17:45.085 "runtime": 2.005843, 00:17:45.085 "iops": 17158.870360242552, 00:17:45.085 "mibps": 67.02683734469747, 00:17:45.085 "io_failed": 0, 00:17:45.085 "io_timeout": 0, 00:17:45.085 "avg_latency_us": 7454.739041199372, 00:17:45.085 "min_latency_us": 6613.178181818182, 00:17:45.085 "max_latency_us": 17396.82909090909 00:17:45.085 } 00:17:45.085 ], 00:17:45.085 "core_count": 1 00:17:45.085 } 00:17:45.085 11:01:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:17:45.085 11:01:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:17:45.085 11:01:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:45.085 11:01:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:45.085 11:01:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:45.085 | select(.opcode=="crc32c") 00:17:45.085 | "\(.module_name) \(.executed)"' 00:17:45.343 11:01:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:17:45.343 11:01:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:17:45.343 11:01:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:17:45.343 11:01:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:45.343 11:01:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 79745 00:17:45.343 11:01:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 79745 ']' 00:17:45.343 11:01:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 79745 00:17:45.343 11:01:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:17:45.343 11:01:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:45.343 11:01:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79745 00:17:45.343 11:01:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:45.343 11:01:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 
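The first randread run's numbers are internally consistent: 17158.87 IOPS at 4096-byte I/O is 17158.87 x 4096 / 2^20, roughly 67.03 MiB/s, matching the MiB/s column, and by Little's law a queue depth of 128 at 17158.87 IOPS implies 128 / 17158.87, roughly 7.46 ms, in line with the reported 7454.74 us average latency. The later 131072-byte, queue-depth-16 run can be checked the same way (8316.92 IOPS gives about 1039.6 MiB/s and about 1.92 ms).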
00:17:45.343 killing process with pid 79745 00:17:45.343 11:01:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79745' 00:17:45.343 11:01:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 79745 00:17:45.343 Received shutdown signal, test time was about 2.000000 seconds 00:17:45.343 00:17:45.343 Latency(us) 00:17:45.343 [2024-11-15T11:01:32.204Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:45.343 [2024-11-15T11:01:32.204Z] =================================================================================================================== 00:17:45.343 [2024-11-15T11:01:32.204Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:45.343 11:01:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 79745 00:17:45.602 11:01:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:17:45.602 11:01:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:17:45.602 11:01:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:45.602 11:01:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:17:45.602 11:01:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:17:45.602 11:01:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:17:45.602 11:01:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:17:45.602 11:01:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:17:45.602 11:01:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=79792 00:17:45.602 11:01:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 79792 /var/tmp/bperf.sock 00:17:45.602 11:01:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 79792 ']' 00:17:45.602 11:01:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:45.602 11:01:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:45.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:45.602 11:01:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:45.602 11:01:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:45.602 11:01:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:45.602 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:45.602 Zero copy mechanism will not be used. 00:17:45.602 [2024-11-15 11:01:32.384872] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
00:17:45.602 [2024-11-15 11:01:32.384980] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79792 ] 00:17:45.861 [2024-11-15 11:01:32.527515] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:45.861 [2024-11-15 11:01:32.583156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:45.861 11:01:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:45.861 11:01:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:17:45.861 11:01:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:17:45.861 11:01:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:17:45.861 11:01:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:46.119 [2024-11-15 11:01:32.907312] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:46.119 11:01:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:46.119 11:01:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:46.687 nvme0n1 00:17:46.687 11:01:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:17:46.687 11:01:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:46.687 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:46.687 Zero copy mechanism will not be used. 00:17:46.687 Running I/O for 2 seconds... 
00:17:48.556 8432.00 IOPS, 1054.00 MiB/s [2024-11-15T11:01:35.418Z] 8320.00 IOPS, 1040.00 MiB/s 00:17:48.557 Latency(us) 00:17:48.557 [2024-11-15T11:01:35.418Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:48.557 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:17:48.557 nvme0n1 : 2.00 8316.92 1039.62 0.00 0.00 1920.72 1638.40 6464.23 00:17:48.557 [2024-11-15T11:01:35.418Z] =================================================================================================================== 00:17:48.557 [2024-11-15T11:01:35.418Z] Total : 8316.92 1039.62 0.00 0.00 1920.72 1638.40 6464.23 00:17:48.557 { 00:17:48.557 "results": [ 00:17:48.557 { 00:17:48.557 "job": "nvme0n1", 00:17:48.557 "core_mask": "0x2", 00:17:48.557 "workload": "randread", 00:17:48.557 "status": "finished", 00:17:48.557 "queue_depth": 16, 00:17:48.557 "io_size": 131072, 00:17:48.557 "runtime": 2.002664, 00:17:48.557 "iops": 8316.92186008237, 00:17:48.557 "mibps": 1039.6152325102962, 00:17:48.557 "io_failed": 0, 00:17:48.557 "io_timeout": 0, 00:17:48.557 "avg_latency_us": 1920.719420137979, 00:17:48.557 "min_latency_us": 1638.4, 00:17:48.557 "max_latency_us": 6464.232727272727 00:17:48.557 } 00:17:48.557 ], 00:17:48.557 "core_count": 1 00:17:48.557 } 00:17:48.816 11:01:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:17:48.816 11:01:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:17:48.816 11:01:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:48.816 11:01:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:48.816 | select(.opcode=="crc32c") 00:17:48.816 | "\(.module_name) \(.executed)"' 00:17:48.816 11:01:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:49.075 11:01:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:17:49.075 11:01:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:17:49.075 11:01:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:17:49.075 11:01:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:49.075 11:01:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 79792 00:17:49.076 11:01:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 79792 ']' 00:17:49.076 11:01:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 79792 00:17:49.076 11:01:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:17:49.076 11:01:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:49.076 11:01:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79792 00:17:49.076 11:01:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:49.076 killing process with pid 79792 00:17:49.076 11:01:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- 
# '[' reactor_1 = sudo ']' 00:17:49.076 11:01:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79792' 00:17:49.076 11:01:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 79792 00:17:49.076 Received shutdown signal, test time was about 2.000000 seconds 00:17:49.076 00:17:49.076 Latency(us) 00:17:49.076 [2024-11-15T11:01:35.937Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:49.076 [2024-11-15T11:01:35.937Z] =================================================================================================================== 00:17:49.076 [2024-11-15T11:01:35.937Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:49.076 11:01:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 79792 00:17:49.076 11:01:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:17:49.076 11:01:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:17:49.076 11:01:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:49.076 11:01:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:17:49.076 11:01:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:17:49.076 11:01:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:17:49.076 11:01:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:17:49.076 11:01:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:17:49.076 11:01:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=79850 00:17:49.076 11:01:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 79850 /var/tmp/bperf.sock 00:17:49.076 11:01:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 79850 ']' 00:17:49.076 11:01:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:49.076 11:01:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:49.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:49.076 11:01:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:49.076 11:01:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:49.076 11:01:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:49.354 [2024-11-15 11:01:35.972205] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
00:17:49.354 [2024-11-15 11:01:35.972326] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79850 ] 00:17:49.354 [2024-11-15 11:01:36.113745] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:49.354 [2024-11-15 11:01:36.170441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:49.614 11:01:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:49.615 11:01:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:17:49.615 11:01:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:17:49.615 11:01:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:17:49.615 11:01:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:49.615 [2024-11-15 11:01:36.471839] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:49.874 11:01:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:49.874 11:01:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:50.133 nvme0n1 00:17:50.133 11:01:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:17:50.133 11:01:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:50.391 Running I/O for 2 seconds... 
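For reference, the xtrace above repeats the same per-run sequence for every digest_clean workload. A minimal sketch of that sequence, reconstructed from the trace (not a verbatim excerpt of host/digest.sh; the socket path, addresses and bdevperf arguments are the ones used in this randwrite/4096/qd128 run):

```bash
# Sketch reconstructed from the xtrace above: start bdevperf paused on its own
# RPC socket, wire it to the target with data digest enabled, run I/O for two
# seconds, then read the crc32c accel stats.
SPDK=/home/vagrant/spdk_repo/spdk
BPERF_SOCK=/var/tmp/bperf.sock

"$SPDK/build/examples/bdevperf" -m 2 -r "$BPERF_SOCK" \
    -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc &
# (the real script waits for $BPERF_SOCK to appear before issuing RPCs)

"$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" framework_start_init
"$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst \
    -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests

# After the run, the test reads accel stats and expects the "software" module
# to have executed the crc32c operations (scan_dsa=false in these runs):
"$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" accel_get_stats \
    | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
```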
00:17:52.266 17019.00 IOPS, 66.48 MiB/s [2024-11-15T11:01:39.127Z] 17780.50 IOPS, 69.46 MiB/s 00:17:52.266 Latency(us) 00:17:52.266 [2024-11-15T11:01:39.127Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:52.266 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:52.266 nvme0n1 : 2.01 17790.68 69.49 0.00 0.00 7181.18 2606.55 17635.14 00:17:52.266 [2024-11-15T11:01:39.127Z] =================================================================================================================== 00:17:52.266 [2024-11-15T11:01:39.127Z] Total : 17790.68 69.49 0.00 0.00 7181.18 2606.55 17635.14 00:17:52.266 { 00:17:52.266 "results": [ 00:17:52.266 { 00:17:52.266 "job": "nvme0n1", 00:17:52.266 "core_mask": "0x2", 00:17:52.266 "workload": "randwrite", 00:17:52.266 "status": "finished", 00:17:52.266 "queue_depth": 128, 00:17:52.266 "io_size": 4096, 00:17:52.266 "runtime": 2.007231, 00:17:52.266 "iops": 17790.677804398198, 00:17:52.266 "mibps": 69.49483517343046, 00:17:52.266 "io_failed": 0, 00:17:52.266 "io_timeout": 0, 00:17:52.266 "avg_latency_us": 7181.176104477991, 00:17:52.266 "min_latency_us": 2606.5454545454545, 00:17:52.266 "max_latency_us": 17635.14181818182 00:17:52.266 } 00:17:52.266 ], 00:17:52.266 "core_count": 1 00:17:52.266 } 00:17:52.266 11:01:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:17:52.266 11:01:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:17:52.266 11:01:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:52.266 11:01:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:52.266 | select(.opcode=="crc32c") 00:17:52.266 | "\(.module_name) \(.executed)"' 00:17:52.266 11:01:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:52.526 11:01:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:17:52.526 11:01:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:17:52.526 11:01:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:17:52.526 11:01:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:52.526 11:01:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 79850 00:17:52.526 11:01:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 79850 ']' 00:17:52.526 11:01:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 79850 00:17:52.526 11:01:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:17:52.526 11:01:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:52.526 11:01:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79850 00:17:52.526 11:01:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:52.526 11:01:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 
00:17:52.526 killing process with pid 79850 00:17:52.526 11:01:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79850' 00:17:52.526 Received shutdown signal, test time was about 2.000000 seconds 00:17:52.526 00:17:52.526 Latency(us) 00:17:52.526 [2024-11-15T11:01:39.387Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:52.526 [2024-11-15T11:01:39.387Z] =================================================================================================================== 00:17:52.526 [2024-11-15T11:01:39.387Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:52.526 11:01:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 79850 00:17:52.526 11:01:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 79850 00:17:52.786 11:01:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:17:52.786 11:01:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:17:52.786 11:01:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:52.786 11:01:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:17:52.786 11:01:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:17:52.786 11:01:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:17:52.786 11:01:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:17:52.786 11:01:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=79898 00:17:52.786 11:01:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 79898 /var/tmp/bperf.sock 00:17:52.786 11:01:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:17:52.786 11:01:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 79898 ']' 00:17:52.786 11:01:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:52.786 11:01:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:52.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:52.786 11:01:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:52.786 11:01:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:52.786 11:01:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:52.786 [2024-11-15 11:01:39.591400] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:17:52.786 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:52.786 Zero copy mechanism will not be used. 
00:17:52.786 [2024-11-15 11:01:39.592127] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79898 ] 00:17:53.045 [2024-11-15 11:01:39.734054] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:53.045 [2024-11-15 11:01:39.790757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:53.982 11:01:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:53.982 11:01:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:17:53.982 11:01:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:17:53.982 11:01:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:17:53.982 11:01:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:53.982 [2024-11-15 11:01:40.817223] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:54.241 11:01:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:54.241 11:01:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:54.500 nvme0n1 00:17:54.500 11:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:17:54.500 11:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:54.500 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:54.500 Zero copy mechanism will not be used. 00:17:54.500 Running I/O for 2 seconds... 
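The bdevperf note above about zero copy is expected for this workload: the 131072-byte I/O size exceeds the 65536-byte zero-copy threshold. In the results block that follows, the MiB/s figure is simply iops x io_size / 2^20 (5914.46 x 131072 B is about 739.31 MiB/s). A small consistency-check sketch, assuming the JSON block printed below is saved to a hypothetical results.json (the field names iops, io_size and mibps are the ones bdevperf emits):

```bash
# Not part of the test suite: recompute MiB/s from a saved bdevperf results
# JSON and compare it against the reported value.
jq -r '.results[] | [.job, .iops, .io_size, .mibps] | @tsv' results.json |
while IFS=$'\t' read -r job iops io_size mibps; do
    awk -v j="$job" -v i="$iops" -v s="$io_size" -v m="$mibps" 'BEGIN {
        printf "%s: reported %.2f MiB/s, recomputed %.2f MiB/s\n", j, m, i * s / 1048576
    }'
done
```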
00:17:56.373 5870.00 IOPS, 733.75 MiB/s [2024-11-15T11:01:43.234Z] 5915.00 IOPS, 739.38 MiB/s 00:17:56.373 Latency(us) 00:17:56.373 [2024-11-15T11:01:43.234Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:56.373 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:17:56.373 nvme0n1 : 2.00 5914.46 739.31 0.00 0.00 2700.01 1750.11 6881.28 00:17:56.373 [2024-11-15T11:01:43.234Z] =================================================================================================================== 00:17:56.373 [2024-11-15T11:01:43.234Z] Total : 5914.46 739.31 0.00 0.00 2700.01 1750.11 6881.28 00:17:56.632 { 00:17:56.632 "results": [ 00:17:56.632 { 00:17:56.632 "job": "nvme0n1", 00:17:56.632 "core_mask": "0x2", 00:17:56.632 "workload": "randwrite", 00:17:56.633 "status": "finished", 00:17:56.633 "queue_depth": 16, 00:17:56.633 "io_size": 131072, 00:17:56.633 "runtime": 2.004073, 00:17:56.633 "iops": 5914.455211960842, 00:17:56.633 "mibps": 739.3069014951052, 00:17:56.633 "io_failed": 0, 00:17:56.633 "io_timeout": 0, 00:17:56.633 "avg_latency_us": 2700.0130311466987, 00:17:56.633 "min_latency_us": 1750.1090909090908, 00:17:56.633 "max_latency_us": 6881.28 00:17:56.633 } 00:17:56.633 ], 00:17:56.633 "core_count": 1 00:17:56.633 } 00:17:56.633 11:01:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:17:56.633 11:01:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:17:56.633 11:01:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:56.633 11:01:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:56.633 11:01:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:56.633 | select(.opcode=="crc32c") 00:17:56.633 | "\(.module_name) \(.executed)"' 00:17:56.892 11:01:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:17:56.892 11:01:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:17:56.892 11:01:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:17:56.892 11:01:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:56.892 11:01:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 79898 00:17:56.892 11:01:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 79898 ']' 00:17:56.892 11:01:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 79898 00:17:56.892 11:01:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:17:56.892 11:01:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:56.892 11:01:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79898 00:17:56.892 killing process with pid 79898 00:17:56.892 Received shutdown signal, test time was about 2.000000 seconds 00:17:56.892 00:17:56.892 Latency(us) 00:17:56.892 [2024-11-15T11:01:43.753Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:56.892 
[2024-11-15T11:01:43.753Z] =================================================================================================================== 00:17:56.892 [2024-11-15T11:01:43.753Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:56.892 11:01:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:56.892 11:01:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:56.892 11:01:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79898' 00:17:56.892 11:01:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 79898 00:17:56.892 11:01:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 79898 00:17:56.892 11:01:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 79707 00:17:56.892 11:01:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 79707 ']' 00:17:56.892 11:01:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 79707 00:17:56.892 11:01:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:17:56.892 11:01:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:56.892 11:01:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79707 00:17:57.151 killing process with pid 79707 00:17:57.151 11:01:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:57.151 11:01:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:57.151 11:01:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79707' 00:17:57.151 11:01:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 79707 00:17:57.151 11:01:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 79707 00:17:57.151 00:17:57.151 real 0m16.636s 00:17:57.151 user 0m31.369s 00:17:57.151 sys 0m4.995s 00:17:57.152 11:01:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:57.152 ************************************ 00:17:57.152 END TEST nvmf_digest_clean 00:17:57.152 ************************************ 00:17:57.152 11:01:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:57.411 11:01:44 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:17:57.411 11:01:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:57.411 11:01:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:57.411 11:01:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:17:57.411 ************************************ 00:17:57.411 START TEST nvmf_digest_error 00:17:57.411 ************************************ 00:17:57.411 11:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:17:57.411 11:01:44 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:17:57.411 11:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:57.411 11:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:57.411 11:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:57.411 11:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=79983 00:17:57.411 11:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 79983 00:17:57.411 11:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 79983 ']' 00:17:57.411 11:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:17:57.411 11:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:57.411 11:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:57.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:57.411 11:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:57.411 11:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:57.411 11:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:57.411 [2024-11-15 11:01:44.120351] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:17:57.411 [2024-11-15 11:01:44.120467] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:57.411 [2024-11-15 11:01:44.257720] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:57.671 [2024-11-15 11:01:44.298672] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:57.671 [2024-11-15 11:01:44.298775] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:57.671 [2024-11-15 11:01:44.298787] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:57.671 [2024-11-15 11:01:44.298794] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:57.671 [2024-11-15 11:01:44.298802] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
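The digest_error variant starting here drives the same I/O path but with crc32c deliberately corrupted. Reconstructed from the xtrace that follows, the relevant RPC sequence looks roughly like the sketch below (not a verbatim excerpt: rpc_cmd in the original script goes to the nvmf target's default RPC socket inside its network namespace, while bperf_rpc goes to bdevperf's /var/tmp/bperf.sock):

```bash
# Sketch of the error-injection setup, reconstructed from the trace below.
SPDK=/home/vagrant/spdk_repo/spdk
BPERF_SOCK=/var/tmp/bperf.sock

# Target side: route the crc32c operation through the accel "error" module
# (issued before framework init, hence the --wait-for-rpc start above).
"$SPDK/scripts/rpc.py" accel_assign_opc -o crc32c -m error

# Initiator side: bdevperf keeps per-NVMe error stats, retries indefinitely,
# and attaches over TCP with data digest enabled.
"$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_set_options \
    --nvme-error-stat --bdev-retry-count -1
"$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t disable
"$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst \
    -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Target side again: injection is switched from disabled to corrupting 256
# crc32c operations, which produces the "data digest error" /
# COMMAND TRANSIENT TRANSPORT ERROR entries seen further below.
"$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 256
```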
00:17:57.671 [2024-11-15 11:01:44.299237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:57.671 11:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:57.671 11:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:17:57.671 11:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:57.671 11:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:57.671 11:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:57.671 11:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:57.671 11:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:17:57.671 11:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.671 11:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:57.671 [2024-11-15 11:01:44.403681] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:17:57.671 11:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.671 11:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:17:57.671 11:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:17:57.671 11:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.671 11:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:57.671 [2024-11-15 11:01:44.480168] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:57.930 null0 00:17:57.930 [2024-11-15 11:01:44.538863] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:57.930 [2024-11-15 11:01:44.563033] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:57.930 11:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.930 11:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:17:57.930 11:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:17:57.930 11:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:17:57.930 11:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:17:57.930 11:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:17:57.930 11:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80007 00:17:57.930 11:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80007 /var/tmp/bperf.sock 00:17:57.930 11:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:17:57.930 11:01:44 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80007 ']' 00:17:57.930 11:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:57.930 11:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:57.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:57.930 11:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:57.930 11:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:57.930 11:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:57.930 [2024-11-15 11:01:44.623875] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:17:57.930 [2024-11-15 11:01:44.623974] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80007 ] 00:17:57.930 [2024-11-15 11:01:44.769755] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:58.195 [2024-11-15 11:01:44.824483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:58.195 [2024-11-15 11:01:44.878763] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:58.195 11:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:58.195 11:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:17:58.195 11:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:58.195 11:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:58.460 11:01:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:17:58.460 11:01:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.460 11:01:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:58.460 11:01:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.460 11:01:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:58.460 11:01:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:58.718 nvme0n1 00:17:58.718 11:01:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:17:58.718 11:01:45 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.718 11:01:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:58.977 11:01:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.977 11:01:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:17:58.977 11:01:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:58.977 Running I/O for 2 seconds... 00:17:58.977 [2024-11-15 11:01:45.732439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:17:58.977 [2024-11-15 11:01:45.732502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7536 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.977 [2024-11-15 11:01:45.732533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.977 [2024-11-15 11:01:45.746745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:17:58.977 [2024-11-15 11:01:45.746796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13202 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.977 [2024-11-15 11:01:45.746825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.977 [2024-11-15 11:01:45.760966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:17:58.977 [2024-11-15 11:01:45.761016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13336 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.977 [2024-11-15 11:01:45.761043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.977 [2024-11-15 11:01:45.775139] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:17:58.977 [2024-11-15 11:01:45.775189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14770 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.977 [2024-11-15 11:01:45.775217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.977 [2024-11-15 11:01:45.789248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:17:58.977 [2024-11-15 11:01:45.789298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18265 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.977 [2024-11-15 11:01:45.789326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.977 [2024-11-15 11:01:45.803472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:17:58.977 [2024-11-15 11:01:45.803522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11114 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.977 [2024-11-15 11:01:45.803560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.977 [2024-11-15 11:01:45.818272] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:17:58.977 [2024-11-15 11:01:45.818323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18667 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.977 [2024-11-15 11:01:45.818335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.977 [2024-11-15 11:01:45.833875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:17:58.977 [2024-11-15 11:01:45.833927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14168 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.977 [2024-11-15 11:01:45.833956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.237 [2024-11-15 11:01:45.849216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:17:59.237 [2024-11-15 11:01:45.849266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:4202 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.237 [2024-11-15 11:01:45.849293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.237 [2024-11-15 11:01:45.863798] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:17:59.237 [2024-11-15 11:01:45.863874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:8729 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.237 [2024-11-15 11:01:45.863887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.237 [2024-11-15 11:01:45.877879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:17:59.237 [2024-11-15 11:01:45.877927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:5717 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.237 [2024-11-15 11:01:45.877955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.237 [2024-11-15 11:01:45.891974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:17:59.237 [2024-11-15 11:01:45.892027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:13507 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.237 [2024-11-15 11:01:45.892056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.237 [2024-11-15 11:01:45.907314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:17:59.237 [2024-11-15 11:01:45.907365] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:25 nsid:1 lba:497 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.237 [2024-11-15 11:01:45.907393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.237 [2024-11-15 11:01:45.922549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:17:59.237 [2024-11-15 11:01:45.922610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:22090 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.237 [2024-11-15 11:01:45.922639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.237 [2024-11-15 11:01:45.937731] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:17:59.237 [2024-11-15 11:01:45.937785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:4671 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.237 [2024-11-15 11:01:45.937814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.237 [2024-11-15 11:01:45.953535] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:17:59.237 [2024-11-15 11:01:45.953583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:25591 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.237 [2024-11-15 11:01:45.953598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.237 [2024-11-15 11:01:45.969173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:17:59.237 [2024-11-15 11:01:45.969206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:17900 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.237 [2024-11-15 11:01:45.969234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.237 [2024-11-15 11:01:45.983589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:17:59.237 [2024-11-15 11:01:45.983623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:22743 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.237 [2024-11-15 11:01:45.983650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.237 [2024-11-15 11:01:45.997578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:17:59.237 [2024-11-15 11:01:45.997611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:9793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.237 [2024-11-15 11:01:45.997638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.237 [2024-11-15 11:01:46.011458] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:17:59.237 [2024-11-15 11:01:46.011491] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:16424 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.237 [2024-11-15 11:01:46.011518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.238 [2024-11-15 11:01:46.025433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:17:59.238 [2024-11-15 11:01:46.025466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:5389 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.238 [2024-11-15 11:01:46.025494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.238 [2024-11-15 11:01:46.039494] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:17:59.238 [2024-11-15 11:01:46.039552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:25292 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.238 [2024-11-15 11:01:46.039565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.238 [2024-11-15 11:01:46.053365] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:17:59.238 [2024-11-15 11:01:46.053398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:17825 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.238 [2024-11-15 11:01:46.053425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.238 [2024-11-15 11:01:46.067294] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:17:59.238 [2024-11-15 11:01:46.067328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:10169 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.238 [2024-11-15 11:01:46.067355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.238 [2024-11-15 11:01:46.081136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:17:59.238 [2024-11-15 11:01:46.081169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:13491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.238 [2024-11-15 11:01:46.081197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.238 [2024-11-15 11:01:46.095346] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:17:59.238 [2024-11-15 11:01:46.095397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:17254 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.238 [2024-11-15 11:01:46.095425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.497 [2024-11-15 11:01:46.109667] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x138f370) 00:17:59.497 [2024-11-15 11:01:46.109700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:16216 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.497 [2024-11-15 11:01:46.109727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.497 [2024-11-15 11:01:46.123630] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:17:59.497 [2024-11-15 11:01:46.123662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:1598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.497 [2024-11-15 11:01:46.123689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.497 [2024-11-15 11:01:46.137481] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:17:59.497 [2024-11-15 11:01:46.137514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:12524 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.497 [2024-11-15 11:01:46.137549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.497 [2024-11-15 11:01:46.151376] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:17:59.497 [2024-11-15 11:01:46.151409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:17934 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.497 [2024-11-15 11:01:46.151437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.497 [2024-11-15 11:01:46.165254] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:17:59.497 [2024-11-15 11:01:46.165297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:14348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.497 [2024-11-15 11:01:46.165325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.497 [2024-11-15 11:01:46.179287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:17:59.497 [2024-11-15 11:01:46.179338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:22138 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.497 [2024-11-15 11:01:46.179366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.497 [2024-11-15 11:01:46.193956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:17:59.497 [2024-11-15 11:01:46.193989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:10873 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.497 [2024-11-15 11:01:46.194016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.497 [2024-11-15 11:01:46.207919] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:17:59.497 [2024-11-15 11:01:46.207970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:12387 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.497 [2024-11-15 11:01:46.207998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.497 [2024-11-15 11:01:46.222001] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:17:59.497 [2024-11-15 11:01:46.222034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:5141 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.497 [2024-11-15 11:01:46.222062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.497 [2024-11-15 11:01:46.236027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:17:59.497 [2024-11-15 11:01:46.236077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:3168 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.497 [2024-11-15 11:01:46.236105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.497 [2024-11-15 11:01:46.249979] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:17:59.498 [2024-11-15 11:01:46.250012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:22730 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.498 [2024-11-15 11:01:46.250040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.498 [2024-11-15 11:01:46.263899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:17:59.498 [2024-11-15 11:01:46.263949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:24620 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.498 [2024-11-15 11:01:46.263977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.498 [2024-11-15 11:01:46.277671] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:17:59.498 [2024-11-15 11:01:46.277704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:13693 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.498 [2024-11-15 11:01:46.277731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.498 [2024-11-15 11:01:46.291458] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:17:59.498 [2024-11-15 11:01:46.291506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:21269 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.498 [2024-11-15 11:01:46.291534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:17:59.498 [2024-11-15 11:01:46.305316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:17:59.498 [2024-11-15 11:01:46.305348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:11260 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.498 [2024-11-15 11:01:46.305375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.498 [2024-11-15 11:01:46.319549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:17:59.498 [2024-11-15 11:01:46.319590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:15721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.498 [2024-11-15 11:01:46.319617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.498 [2024-11-15 11:01:46.333451] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:17:59.498 [2024-11-15 11:01:46.333483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:24055 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.498 [2024-11-15 11:01:46.333511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.498 [2024-11-15 11:01:46.347789] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:17:59.498 [2024-11-15 11:01:46.347857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:7915 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.498 [2024-11-15 11:01:46.347885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.757 [2024-11-15 11:01:46.362593] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:17:59.757 [2024-11-15 11:01:46.362643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:16097 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.757 [2024-11-15 11:01:46.362671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.757 [2024-11-15 11:01:46.376616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:17:59.757 [2024-11-15 11:01:46.376648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:5187 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.757 [2024-11-15 11:01:46.376676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.757 [2024-11-15 11:01:46.390404] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:17:59.757 [2024-11-15 11:01:46.390452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:9454 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.757 [2024-11-15 11:01:46.390480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.757 [2024-11-15 11:01:46.404361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:17:59.757 [2024-11-15 11:01:46.404394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:18280 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.757 [2024-11-15 11:01:46.404422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.757 [2024-11-15 11:01:46.418226] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:17:59.757 [2024-11-15 11:01:46.418259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:248 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.757 [2024-11-15 11:01:46.418286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.757 [2024-11-15 11:01:46.432205] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:17:59.757 [2024-11-15 11:01:46.432236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:3861 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.757 [2024-11-15 11:01:46.432263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.757 [2024-11-15 11:01:46.446836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:17:59.757 [2024-11-15 11:01:46.446886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:11639 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.757 [2024-11-15 11:01:46.446913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.757 [2024-11-15 11:01:46.460809] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:17:59.757 [2024-11-15 11:01:46.460840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:6459 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.757 [2024-11-15 11:01:46.460867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.757 [2024-11-15 11:01:46.474598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:17:59.757 [2024-11-15 11:01:46.474648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:3183 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.757 [2024-11-15 11:01:46.474676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.757 [2024-11-15 11:01:46.488722] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:17:59.757 [2024-11-15 11:01:46.488772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:15468 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.757 [2024-11-15 11:01:46.488800] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.757 [2024-11-15 11:01:46.503715] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:17:59.757 [2024-11-15 11:01:46.503767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:24668 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.757 [2024-11-15 11:01:46.503796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.757 [2024-11-15 11:01:46.518854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:17:59.757 [2024-11-15 11:01:46.518921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:3148 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.757 [2024-11-15 11:01:46.518949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.757 [2024-11-15 11:01:46.533652] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:17:59.757 [2024-11-15 11:01:46.533703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:20519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.758 [2024-11-15 11:01:46.533732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.758 [2024-11-15 11:01:46.548096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:17:59.758 [2024-11-15 11:01:46.548164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:12547 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.758 [2024-11-15 11:01:46.548192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.758 [2024-11-15 11:01:46.562138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:17:59.758 [2024-11-15 11:01:46.562188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:24083 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.758 [2024-11-15 11:01:46.562216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.758 [2024-11-15 11:01:46.576390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:17:59.758 [2024-11-15 11:01:46.576440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:5549 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.758 [2024-11-15 11:01:46.576468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.758 [2024-11-15 11:01:46.590431] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:17:59.758 [2024-11-15 11:01:46.590482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:14767 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:17:59.758 [2024-11-15 11:01:46.590511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.758 [2024-11-15 11:01:46.606578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:17:59.758 [2024-11-15 11:01:46.606658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:10693 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.758 [2024-11-15 11:01:46.606672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.017 [2024-11-15 11:01:46.621967] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:18:00.017 [2024-11-15 11:01:46.622034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:16580 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.017 [2024-11-15 11:01:46.622062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.017 [2024-11-15 11:01:46.642492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:18:00.017 [2024-11-15 11:01:46.642581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:3488 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.017 [2024-11-15 11:01:46.642610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.017 [2024-11-15 11:01:46.657311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:18:00.017 [2024-11-15 11:01:46.657348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:838 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.017 [2024-11-15 11:01:46.657377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.017 [2024-11-15 11:01:46.672592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:18:00.017 [2024-11-15 11:01:46.672654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:3534 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.017 [2024-11-15 11:01:46.672682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.017 [2024-11-15 11:01:46.686796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:18:00.017 [2024-11-15 11:01:46.686846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:6726 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.017 [2024-11-15 11:01:46.686873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.017 [2024-11-15 11:01:46.701188] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:18:00.017 [2024-11-15 11:01:46.701240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:118 nsid:1 lba:13722 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.017 [2024-11-15 11:01:46.701267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.017 17585.00 IOPS, 68.69 MiB/s [2024-11-15T11:01:46.878Z] [2024-11-15 11:01:46.715224] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:18:00.017 [2024-11-15 11:01:46.715273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:2525 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.017 [2024-11-15 11:01:46.715300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.017 [2024-11-15 11:01:46.729582] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:18:00.017 [2024-11-15 11:01:46.729630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:15466 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.017 [2024-11-15 11:01:46.729658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.017 [2024-11-15 11:01:46.743600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:18:00.017 [2024-11-15 11:01:46.743648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:545 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.017 [2024-11-15 11:01:46.743676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.017 [2024-11-15 11:01:46.758287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:18:00.017 [2024-11-15 11:01:46.758336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:4670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.017 [2024-11-15 11:01:46.758364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.017 [2024-11-15 11:01:46.772364] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:18:00.018 [2024-11-15 11:01:46.772413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:5906 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.018 [2024-11-15 11:01:46.772440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.018 [2024-11-15 11:01:46.786402] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:18:00.018 [2024-11-15 11:01:46.786451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:19220 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.018 [2024-11-15 11:01:46.786478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.018 [2024-11-15 11:01:46.800922] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x138f370) 00:18:00.018 [2024-11-15 11:01:46.800987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:2956 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.018 [2024-11-15 11:01:46.801015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.018 [2024-11-15 11:01:46.814839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:18:00.018 [2024-11-15 11:01:46.814873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:15711 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.018 [2024-11-15 11:01:46.814900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.018 [2024-11-15 11:01:46.829126] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:18:00.018 [2024-11-15 11:01:46.829161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:10037 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.018 [2024-11-15 11:01:46.829189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.018 [2024-11-15 11:01:46.843909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:18:00.018 [2024-11-15 11:01:46.843963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:24439 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.018 [2024-11-15 11:01:46.843981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.018 [2024-11-15 11:01:46.859230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:18:00.018 [2024-11-15 11:01:46.859264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:23407 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.018 [2024-11-15 11:01:46.859292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.018 [2024-11-15 11:01:46.874313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:18:00.018 [2024-11-15 11:01:46.874347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:2 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.018 [2024-11-15 11:01:46.874374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.277 [2024-11-15 11:01:46.889253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:18:00.277 [2024-11-15 11:01:46.889286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:9031 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.277 [2024-11-15 11:01:46.889313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.277 [2024-11-15 11:01:46.903301] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:18:00.277 [2024-11-15 11:01:46.903334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:24943 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.277 [2024-11-15 11:01:46.903361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.277 [2024-11-15 11:01:46.917298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:18:00.277 [2024-11-15 11:01:46.917331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:4706 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.277 [2024-11-15 11:01:46.917358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.277 [2024-11-15 11:01:46.931219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:18:00.277 [2024-11-15 11:01:46.931251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:1114 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.277 [2024-11-15 11:01:46.931279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.277 [2024-11-15 11:01:46.945082] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:18:00.277 [2024-11-15 11:01:46.945115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:6900 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.277 [2024-11-15 11:01:46.945142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.277 [2024-11-15 11:01:46.959730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:18:00.277 [2024-11-15 11:01:46.959793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:1287 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.277 [2024-11-15 11:01:46.959843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.277 [2024-11-15 11:01:46.975830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:18:00.277 [2024-11-15 11:01:46.975870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:25259 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.277 [2024-11-15 11:01:46.975884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.277 [2024-11-15 11:01:46.991165] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:18:00.277 [2024-11-15 11:01:46.991217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:24522 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.277 [2024-11-15 11:01:46.991245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:18:00.277 [2024-11-15 11:01:47.006241] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:18:00.277 [2024-11-15 11:01:47.006291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:11934 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.277 [2024-11-15 11:01:47.006304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.277 [2024-11-15 11:01:47.021331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:18:00.277 [2024-11-15 11:01:47.021381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:6267 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.277 [2024-11-15 11:01:47.021424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.277 [2024-11-15 11:01:47.035636] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:18:00.277 [2024-11-15 11:01:47.035686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:10818 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.278 [2024-11-15 11:01:47.035714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.278 [2024-11-15 11:01:47.049550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:18:00.278 [2024-11-15 11:01:47.049598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:19686 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.278 [2024-11-15 11:01:47.049625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.278 [2024-11-15 11:01:47.063457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:18:00.278 [2024-11-15 11:01:47.063508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:19797 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.278 [2024-11-15 11:01:47.063536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.278 [2024-11-15 11:01:47.077330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:18:00.278 [2024-11-15 11:01:47.077379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:23095 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.278 [2024-11-15 11:01:47.077406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.278 [2024-11-15 11:01:47.091229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:18:00.278 [2024-11-15 11:01:47.091277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:8399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.278 [2024-11-15 11:01:47.091305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.278 [2024-11-15 11:01:47.105185] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:18:00.278 [2024-11-15 11:01:47.105233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:5488 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.278 [2024-11-15 11:01:47.105261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.278 [2024-11-15 11:01:47.120349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:18:00.278 [2024-11-15 11:01:47.120402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:2774 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.278 [2024-11-15 11:01:47.120430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.278 [2024-11-15 11:01:47.135972] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:18:00.278 [2024-11-15 11:01:47.136042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:8918 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.278 [2024-11-15 11:01:47.136056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.538 [2024-11-15 11:01:47.150393] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:18:00.538 [2024-11-15 11:01:47.150459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:14762 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.538 [2024-11-15 11:01:47.150487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.538 [2024-11-15 11:01:47.164438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:18:00.538 [2024-11-15 11:01:47.164488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:3165 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.538 [2024-11-15 11:01:47.164515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.538 [2024-11-15 11:01:47.178369] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:18:00.538 [2024-11-15 11:01:47.178434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:10644 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.538 [2024-11-15 11:01:47.178461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.538 [2024-11-15 11:01:47.192469] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:18:00.538 [2024-11-15 11:01:47.192518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:20258 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.538 [2024-11-15 11:01:47.192555] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.538 [2024-11-15 11:01:47.206397] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:18:00.538 [2024-11-15 11:01:47.206461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:12761 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.538 [2024-11-15 11:01:47.206490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.538 [2024-11-15 11:01:47.220541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:18:00.538 [2024-11-15 11:01:47.220598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:3025 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.538 [2024-11-15 11:01:47.220626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.538 [2024-11-15 11:01:47.234448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:18:00.538 [2024-11-15 11:01:47.234498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:16555 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.538 [2024-11-15 11:01:47.234526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.538 [2024-11-15 11:01:47.248438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:18:00.538 [2024-11-15 11:01:47.248486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:15897 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.538 [2024-11-15 11:01:47.248513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.538 [2024-11-15 11:01:47.262389] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:18:00.538 [2024-11-15 11:01:47.262455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:6676 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.538 [2024-11-15 11:01:47.262483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.538 [2024-11-15 11:01:47.276440] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:18:00.538 [2024-11-15 11:01:47.276488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:23326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.538 [2024-11-15 11:01:47.276515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.538 [2024-11-15 11:01:47.290495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:18:00.538 [2024-11-15 11:01:47.290568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:19948 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:00.538 [2024-11-15 11:01:47.290581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.538 [2024-11-15 11:01:47.304469] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:18:00.538 [2024-11-15 11:01:47.304517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:25317 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.538 [2024-11-15 11:01:47.304552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.538 [2024-11-15 11:01:47.318370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:18:00.538 [2024-11-15 11:01:47.318434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:7364 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.538 [2024-11-15 11:01:47.318462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.538 [2024-11-15 11:01:47.332582] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:18:00.538 [2024-11-15 11:01:47.332631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:3272 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.538 [2024-11-15 11:01:47.332659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.538 [2024-11-15 11:01:47.346811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:18:00.538 [2024-11-15 11:01:47.346860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:7128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.538 [2024-11-15 11:01:47.346887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.538 [2024-11-15 11:01:47.360924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:18:00.538 [2024-11-15 11:01:47.360972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:18147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.538 [2024-11-15 11:01:47.361000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.538 [2024-11-15 11:01:47.375152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:18:00.538 [2024-11-15 11:01:47.375203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:21503 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.538 [2024-11-15 11:01:47.375231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.538 [2024-11-15 11:01:47.389703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:18:00.538 [2024-11-15 11:01:47.389752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 
lba:5251 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.538 [2024-11-15 11:01:47.389779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.798 [2024-11-15 11:01:47.404419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:18:00.798 [2024-11-15 11:01:47.404468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18332 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.798 [2024-11-15 11:01:47.404495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.798 [2024-11-15 11:01:47.418492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:18:00.798 [2024-11-15 11:01:47.418565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10140 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.798 [2024-11-15 11:01:47.418579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.798 [2024-11-15 11:01:47.432516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:18:00.798 [2024-11-15 11:01:47.432573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:3980 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.798 [2024-11-15 11:01:47.432601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.798 [2024-11-15 11:01:47.446618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:18:00.798 [2024-11-15 11:01:47.446668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19762 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.798 [2024-11-15 11:01:47.446696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.798 [2024-11-15 11:01:47.460624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:18:00.798 [2024-11-15 11:01:47.460672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1816 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.798 [2024-11-15 11:01:47.460700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.798 [2024-11-15 11:01:47.474526] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:18:00.798 [2024-11-15 11:01:47.474585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6350 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.798 [2024-11-15 11:01:47.474613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.798 [2024-11-15 11:01:47.488435] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:18:00.798 [2024-11-15 11:01:47.488483] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18148 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.798 [2024-11-15 11:01:47.488510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.798 [2024-11-15 11:01:47.502295] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:18:00.798 [2024-11-15 11:01:47.502343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17130 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.798 [2024-11-15 11:01:47.502370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.798 [2024-11-15 11:01:47.516377] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:18:00.798 [2024-11-15 11:01:47.516428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16093 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.798 [2024-11-15 11:01:47.516440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.798 [2024-11-15 11:01:47.530231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:18:00.798 [2024-11-15 11:01:47.530279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2659 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.798 [2024-11-15 11:01:47.530307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.798 [2024-11-15 11:01:47.544199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:18:00.798 [2024-11-15 11:01:47.544248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19246 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.798 [2024-11-15 11:01:47.544260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.799 [2024-11-15 11:01:47.564066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:18:00.799 [2024-11-15 11:01:47.564118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14572 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.799 [2024-11-15 11:01:47.564161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.799 [2024-11-15 11:01:47.577994] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:18:00.799 [2024-11-15 11:01:47.578043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10281 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.799 [2024-11-15 11:01:47.578071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.799 [2024-11-15 11:01:47.591915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:18:00.799 
[2024-11-15 11:01:47.591967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4728 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.799 [2024-11-15 11:01:47.591995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.799 [2024-11-15 11:01:47.605846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:18:00.799 [2024-11-15 11:01:47.605895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8512 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.799 [2024-11-15 11:01:47.605923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.799 [2024-11-15 11:01:47.619715] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:18:00.799 [2024-11-15 11:01:47.619764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1964 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.799 [2024-11-15 11:01:47.619791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.799 [2024-11-15 11:01:47.633773] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:18:00.799 [2024-11-15 11:01:47.633824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22253 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.799 [2024-11-15 11:01:47.633851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.799 [2024-11-15 11:01:47.648172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:18:00.799 [2024-11-15 11:01:47.648224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5974 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.799 [2024-11-15 11:01:47.648252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.059 [2024-11-15 11:01:47.664188] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:18:01.059 [2024-11-15 11:01:47.664241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.059 [2024-11-15 11:01:47.664273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.059 [2024-11-15 11:01:47.679186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:18:01.059 [2024-11-15 11:01:47.679236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:9935 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.059 [2024-11-15 11:01:47.679264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.059 [2024-11-15 11:01:47.694442] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x138f370) 00:18:01.059 [2024-11-15 11:01:47.694494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10183 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.059 [2024-11-15 11:01:47.694523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.059 [2024-11-15 11:01:47.709952] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x138f370) 00:18:01.059 [2024-11-15 11:01:47.710004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:24954 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.059 [2024-11-15 11:01:47.710032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.059 17647.50 IOPS, 68.94 MiB/s 00:18:01.059 Latency(us) 00:18:01.059 [2024-11-15T11:01:47.920Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:01.059 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:18:01.059 nvme0n1 : 2.01 17629.15 68.86 0.00 0.00 7255.57 6672.76 27882.59 00:18:01.059 [2024-11-15T11:01:47.920Z] =================================================================================================================== 00:18:01.059 [2024-11-15T11:01:47.920Z] Total : 17629.15 68.86 0.00 0.00 7255.57 6672.76 27882.59 00:18:01.059 { 00:18:01.059 "results": [ 00:18:01.059 { 00:18:01.059 "job": "nvme0n1", 00:18:01.059 "core_mask": "0x2", 00:18:01.059 "workload": "randread", 00:18:01.059 "status": "finished", 00:18:01.059 "queue_depth": 128, 00:18:01.059 "io_size": 4096, 00:18:01.059 "runtime": 2.009343, 00:18:01.059 "iops": 17629.145447044135, 00:18:01.059 "mibps": 68.86384940251615, 00:18:01.059 "io_failed": 0, 00:18:01.059 "io_timeout": 0, 00:18:01.059 "avg_latency_us": 7255.573624327286, 00:18:01.059 "min_latency_us": 6672.756363636364, 00:18:01.059 "max_latency_us": 27882.589090909092 00:18:01.059 } 00:18:01.059 ], 00:18:01.059 "core_count": 1 00:18:01.059 } 00:18:01.060 11:01:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:18:01.060 11:01:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:18:01.060 11:01:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:18:01.060 | .driver_specific 00:18:01.060 | .nvme_error 00:18:01.060 | .status_code 00:18:01.060 | .command_transient_transport_error' 00:18:01.060 11:01:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:18:01.319 11:01:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 138 > 0 )) 00:18:01.319 11:01:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80007 00:18:01.319 11:01:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80007 ']' 00:18:01.319 11:01:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80007 00:18:01.319 11:01:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:18:01.319 11:01:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:01.319 11:01:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80007 00:18:01.319 11:01:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:01.319 11:01:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:01.319 killing process with pid 80007 00:18:01.319 11:01:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80007' 00:18:01.319 Received shutdown signal, test time was about 2.000000 seconds 00:18:01.319 00:18:01.319 Latency(us) 00:18:01.319 [2024-11-15T11:01:48.180Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:01.319 [2024-11-15T11:01:48.180Z] =================================================================================================================== 00:18:01.319 [2024-11-15T11:01:48.180Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:01.319 11:01:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80007 00:18:01.319 11:01:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80007 00:18:01.578 11:01:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:18:01.578 11:01:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:18:01.578 11:01:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:18:01.578 11:01:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:18:01.578 11:01:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:18:01.578 11:01:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80060 00:18:01.578 11:01:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80060 /var/tmp/bperf.sock 00:18:01.578 11:01:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:18:01.578 11:01:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80060 ']' 00:18:01.578 11:01:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:01.578 11:01:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:01.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:01.578 11:01:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:01.578 11:01:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:01.578 11:01:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:01.578 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:01.578 Zero copy mechanism will not be used. 
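The pass/fail decision for the run that just finished does not come from the log flood itself but from bdevperf's per-controller NVMe error counters: as the host/digest.sh trace above shows, the harness reads bdev_get_iostat over the bperf RPC socket, extracts the transient-transport-error count with jq, and requires it to be non-zero (138 in this run). A minimal sketch of that readback, assuming the same socket path and bdev name used by this job:

  # Query I/O statistics for nvme0n1 through the bperf RPC socket and pull out
  # the count of commands completed with a transient transport error
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
  # The test then asserts this value is greater than zero, e.g. (( 138 > 0 )) in the trace above.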
00:18:01.578 [2024-11-15 11:01:48.294582] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:18:01.578 [2024-11-15 11:01:48.294696] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80060 ] 00:18:01.578 [2024-11-15 11:01:48.434451] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:01.837 [2024-11-15 11:01:48.484938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:01.837 [2024-11-15 11:01:48.536780] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:02.405 11:01:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:02.405 11:01:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:18:02.405 11:01:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:02.405 11:01:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:02.973 11:01:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:18:02.973 11:01:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.973 11:01:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:02.973 11:01:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.973 11:01:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:02.973 11:01:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:02.973 nvme0n1 00:18:02.973 11:01:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:18:02.973 11:01:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.973 11:01:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:03.233 11:01:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.233 11:01:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:18:03.233 11:01:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:03.233 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:03.233 Zero copy mechanism will not be used. 00:18:03.233 Running I/O for 2 seconds... 
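For this second pass (randread, 131072-byte I/O, queue depth 16) the error injection is re-armed before the workload starts: crc32c results are corrupted via accel_error_inject_error -o crc32c -t corrupt -i 32, and the controller is attached with data digests enabled (--ddgst), so digest verification on the reads repeatedly fails and each failure is completed as COMMAND TRANSIENT TRANSPORT ERROR (00/22) in the records that follow (the READ entries now show len:32 logical blocks for the 131072-byte I/O size, where the earlier 4096-byte pass showed len:1). A condensed sketch of the RPC sequence traced above, using the socket paths and target address from this job; rpc_cmd is the harness helper whose target socket is configured earlier in the suite:

  # Enable per-NVMe error statistics and unlimited bdev retries on the bperf app
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # Clear any previous crc32c error injection
  rpc_cmd accel_error_inject_error -o crc32c -t disable
  # Attach the controller with data digest enabled over TCP
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Arm crc32c corruption for the run (flags as shown in the trace above)
  rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
  # Kick off the 2-second bdevperf workload
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests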
00:18:03.233 [2024-11-15 11:01:49.941702] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.233 [2024-11-15 11:01:49.941782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.233 [2024-11-15 11:01:49.941832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:03.233 [2024-11-15 11:01:49.946394] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.233 [2024-11-15 11:01:49.946477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.233 [2024-11-15 11:01:49.946506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:03.233 [2024-11-15 11:01:49.950933] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.233 [2024-11-15 11:01:49.950983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.233 [2024-11-15 11:01:49.951012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:03.233 [2024-11-15 11:01:49.955263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.233 [2024-11-15 11:01:49.955314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.233 [2024-11-15 11:01:49.955342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:03.233 [2024-11-15 11:01:49.959679] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.233 [2024-11-15 11:01:49.959730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.233 [2024-11-15 11:01:49.959758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:03.233 [2024-11-15 11:01:49.963976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.233 [2024-11-15 11:01:49.964013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.233 [2024-11-15 11:01:49.964041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:03.233 [2024-11-15 11:01:49.968378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.233 [2024-11-15 11:01:49.968427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.233 [2024-11-15 11:01:49.968455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:03.233 [2024-11-15 11:01:49.972787] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.233 [2024-11-15 11:01:49.972825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.233 [2024-11-15 11:01:49.972854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:03.233 [2024-11-15 11:01:49.977202] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.233 [2024-11-15 11:01:49.977237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.233 [2024-11-15 11:01:49.977265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:03.233 [2024-11-15 11:01:49.981552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.233 [2024-11-15 11:01:49.981585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.233 [2024-11-15 11:01:49.981613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:03.233 [2024-11-15 11:01:49.985910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.233 [2024-11-15 11:01:49.985944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.233 [2024-11-15 11:01:49.985973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:03.233 [2024-11-15 11:01:49.990356] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.233 [2024-11-15 11:01:49.990390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.233 [2024-11-15 11:01:49.990418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:03.233 [2024-11-15 11:01:49.994747] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.233 [2024-11-15 11:01:49.994780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.233 [2024-11-15 11:01:49.994808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:03.233 [2024-11-15 11:01:49.999176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.233 [2024-11-15 11:01:49.999210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.233 [2024-11-15 11:01:49.999238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:03.233 [2024-11-15 11:01:50.003615] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.233 [2024-11-15 11:01:50.003650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.233 [2024-11-15 11:01:50.003679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:03.233 [2024-11-15 11:01:50.007919] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.233 [2024-11-15 11:01:50.007957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.233 [2024-11-15 11:01:50.007986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:03.233 [2024-11-15 11:01:50.012339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.233 [2024-11-15 11:01:50.012374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.233 [2024-11-15 11:01:50.012402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:03.233 [2024-11-15 11:01:50.016832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.233 [2024-11-15 11:01:50.016869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.233 [2024-11-15 11:01:50.016912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:03.233 [2024-11-15 11:01:50.021372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.233 [2024-11-15 11:01:50.021409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.233 [2024-11-15 11:01:50.021438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:03.233 [2024-11-15 11:01:50.026060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.233 [2024-11-15 11:01:50.026098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.233 [2024-11-15 11:01:50.026127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:03.233 [2024-11-15 11:01:50.030811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.233 [2024-11-15 11:01:50.030853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.234 [2024-11-15 11:01:50.030868] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:03.234 [2024-11-15 11:01:50.035466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.234 [2024-11-15 11:01:50.035502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.234 [2024-11-15 11:01:50.035515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:03.234 [2024-11-15 11:01:50.040294] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.234 [2024-11-15 11:01:50.040366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.234 [2024-11-15 11:01:50.040379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:03.234 [2024-11-15 11:01:50.045044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.234 [2024-11-15 11:01:50.045080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.234 [2024-11-15 11:01:50.045107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:03.234 [2024-11-15 11:01:50.049714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.234 [2024-11-15 11:01:50.049754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.234 [2024-11-15 11:01:50.049769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:03.234 [2024-11-15 11:01:50.054313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.234 [2024-11-15 11:01:50.054350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.234 [2024-11-15 11:01:50.054363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:03.234 [2024-11-15 11:01:50.059207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.234 [2024-11-15 11:01:50.059259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.234 [2024-11-15 11:01:50.059287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:03.234 [2024-11-15 11:01:50.063948] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.234 [2024-11-15 11:01:50.063997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.234 
[2024-11-15 11:01:50.064011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:03.234 [2024-11-15 11:01:50.068721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.234 [2024-11-15 11:01:50.068758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.234 [2024-11-15 11:01:50.068786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:03.234 [2024-11-15 11:01:50.073206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.234 [2024-11-15 11:01:50.073242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.234 [2024-11-15 11:01:50.073270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:03.234 [2024-11-15 11:01:50.077734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.234 [2024-11-15 11:01:50.077771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.234 [2024-11-15 11:01:50.077784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:03.234 [2024-11-15 11:01:50.082179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.234 [2024-11-15 11:01:50.082214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.234 [2024-11-15 11:01:50.082243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:03.234 [2024-11-15 11:01:50.086636] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.234 [2024-11-15 11:01:50.086671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.234 [2024-11-15 11:01:50.086698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:03.234 [2024-11-15 11:01:50.091296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.234 [2024-11-15 11:01:50.091356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.234 [2024-11-15 11:01:50.091384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:03.495 [2024-11-15 11:01:50.095973] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.495 [2024-11-15 11:01:50.096014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1536 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.495 [2024-11-15 11:01:50.096028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:03.495 [2024-11-15 11:01:50.100704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.495 [2024-11-15 11:01:50.100742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.495 [2024-11-15 11:01:50.100755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:03.495 [2024-11-15 11:01:50.105123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.495 [2024-11-15 11:01:50.105159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.495 [2024-11-15 11:01:50.105187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:03.495 [2024-11-15 11:01:50.109569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.495 [2024-11-15 11:01:50.109604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.495 [2024-11-15 11:01:50.109631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:03.495 [2024-11-15 11:01:50.113941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.495 [2024-11-15 11:01:50.113976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.495 [2024-11-15 11:01:50.114003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:03.495 [2024-11-15 11:01:50.118370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.495 [2024-11-15 11:01:50.118407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.495 [2024-11-15 11:01:50.118419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:03.495 [2024-11-15 11:01:50.122987] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.495 [2024-11-15 11:01:50.123023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.495 [2024-11-15 11:01:50.123036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:03.495 [2024-11-15 11:01:50.127371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.495 [2024-11-15 11:01:50.127406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:9 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.495 [2024-11-15 11:01:50.127434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:03.495 [2024-11-15 11:01:50.131836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.495 [2024-11-15 11:01:50.131875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.495 [2024-11-15 11:01:50.131888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:03.495 [2024-11-15 11:01:50.136446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.495 [2024-11-15 11:01:50.136481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.495 [2024-11-15 11:01:50.136509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:03.495 [2024-11-15 11:01:50.140826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.495 [2024-11-15 11:01:50.140862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.495 [2024-11-15 11:01:50.140891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:03.495 [2024-11-15 11:01:50.145216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.495 [2024-11-15 11:01:50.145251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.496 [2024-11-15 11:01:50.145279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:03.496 [2024-11-15 11:01:50.149639] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.496 [2024-11-15 11:01:50.149676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.496 [2024-11-15 11:01:50.149689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:03.496 [2024-11-15 11:01:50.153974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.496 [2024-11-15 11:01:50.154010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.496 [2024-11-15 11:01:50.154038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:03.496 [2024-11-15 11:01:50.158548] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.496 [2024-11-15 11:01:50.158608] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.496 [2024-11-15 11:01:50.158621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:03.496 [2024-11-15 11:01:50.162921] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.496 [2024-11-15 11:01:50.162957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.496 [2024-11-15 11:01:50.162985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:03.496 [2024-11-15 11:01:50.167622] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.496 [2024-11-15 11:01:50.167670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.496 [2024-11-15 11:01:50.167700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:03.496 [2024-11-15 11:01:50.172376] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.496 [2024-11-15 11:01:50.172425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.496 [2024-11-15 11:01:50.172454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:03.496 [2024-11-15 11:01:50.177031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.496 [2024-11-15 11:01:50.177067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.496 [2024-11-15 11:01:50.177095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:03.496 [2024-11-15 11:01:50.181707] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.496 [2024-11-15 11:01:50.181761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.496 [2024-11-15 11:01:50.181805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:03.496 [2024-11-15 11:01:50.186570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.496 [2024-11-15 11:01:50.186621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.496 [2024-11-15 11:01:50.186635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:03.496 [2024-11-15 11:01:50.191583] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.496 
[2024-11-15 11:01:50.191632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.496 [2024-11-15 11:01:50.191662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:03.496 [2024-11-15 11:01:50.196422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.496 [2024-11-15 11:01:50.196470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.496 [2024-11-15 11:01:50.196499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:03.496 [2024-11-15 11:01:50.201121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.496 [2024-11-15 11:01:50.201159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.496 [2024-11-15 11:01:50.201171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:03.496 [2024-11-15 11:01:50.205619] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.496 [2024-11-15 11:01:50.205654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.496 [2024-11-15 11:01:50.205682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:03.496 [2024-11-15 11:01:50.210148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.496 [2024-11-15 11:01:50.210183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.496 [2024-11-15 11:01:50.210211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:03.496 [2024-11-15 11:01:50.214924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.496 [2024-11-15 11:01:50.214961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.496 [2024-11-15 11:01:50.214990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:03.496 [2024-11-15 11:01:50.219718] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.496 [2024-11-15 11:01:50.219755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.496 [2024-11-15 11:01:50.219785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:03.496 [2024-11-15 11:01:50.224623] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x821400) 00:18:03.496 [2024-11-15 11:01:50.224662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.496 [2024-11-15 11:01:50.224676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:03.496 [2024-11-15 11:01:50.229326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.496 [2024-11-15 11:01:50.229363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.496 [2024-11-15 11:01:50.229392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:03.496 [2024-11-15 11:01:50.233995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.496 [2024-11-15 11:01:50.234032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.496 [2024-11-15 11:01:50.234060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:03.496 [2024-11-15 11:01:50.238637] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.496 [2024-11-15 11:01:50.238675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.496 [2024-11-15 11:01:50.238718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:03.496 [2024-11-15 11:01:50.243268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.496 [2024-11-15 11:01:50.243305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.496 [2024-11-15 11:01:50.243333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:03.496 [2024-11-15 11:01:50.248010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.496 [2024-11-15 11:01:50.248050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.496 [2024-11-15 11:01:50.248063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:03.496 [2024-11-15 11:01:50.252613] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.496 [2024-11-15 11:01:50.252662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.496 [2024-11-15 11:01:50.252691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:03.496 [2024-11-15 11:01:50.257307] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.496 [2024-11-15 11:01:50.257343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.496 [2024-11-15 11:01:50.257371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:03.496 [2024-11-15 11:01:50.261788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.496 [2024-11-15 11:01:50.261840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.496 [2024-11-15 11:01:50.261867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:03.496 [2024-11-15 11:01:50.266371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.496 [2024-11-15 11:01:50.266410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.496 [2024-11-15 11:01:50.266440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:03.497 [2024-11-15 11:01:50.271159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.497 [2024-11-15 11:01:50.271196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.497 [2024-11-15 11:01:50.271224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:03.497 [2024-11-15 11:01:50.276079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.497 [2024-11-15 11:01:50.276120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.497 [2024-11-15 11:01:50.276147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:03.497 [2024-11-15 11:01:50.281063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.497 [2024-11-15 11:01:50.281103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.497 [2024-11-15 11:01:50.281132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:03.497 [2024-11-15 11:01:50.285686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.497 [2024-11-15 11:01:50.285724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.497 [2024-11-15 11:01:50.285762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
00:18:03.497 [2024-11-15 11:01:50.290332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.497 [2024-11-15 11:01:50.290367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.497 [2024-11-15 11:01:50.290396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:03.497 [2024-11-15 11:01:50.295041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.497 [2024-11-15 11:01:50.295077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.497 [2024-11-15 11:01:50.295105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:03.497 [2024-11-15 11:01:50.299657] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.497 [2024-11-15 11:01:50.299693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.497 [2024-11-15 11:01:50.299723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:03.497 [2024-11-15 11:01:50.304387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.497 [2024-11-15 11:01:50.304434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.497 [2024-11-15 11:01:50.304463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:03.497 [2024-11-15 11:01:50.309235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.497 [2024-11-15 11:01:50.309271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.497 [2024-11-15 11:01:50.309299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:03.497 [2024-11-15 11:01:50.313929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.497 [2024-11-15 11:01:50.313964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.497 [2024-11-15 11:01:50.313992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:03.497 [2024-11-15 11:01:50.318607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.497 [2024-11-15 11:01:50.318643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.497 [2024-11-15 11:01:50.318671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:03.497 [2024-11-15 11:01:50.323287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.497 [2024-11-15 11:01:50.323322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.497 [2024-11-15 11:01:50.323349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:03.497 [2024-11-15 11:01:50.327949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.497 [2024-11-15 11:01:50.327986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.497 [2024-11-15 11:01:50.328015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:03.497 [2024-11-15 11:01:50.332627] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.497 [2024-11-15 11:01:50.332679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.497 [2024-11-15 11:01:50.332716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:03.497 [2024-11-15 11:01:50.337252] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.497 [2024-11-15 11:01:50.337290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.497 [2024-11-15 11:01:50.337303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:03.497 [2024-11-15 11:01:50.341722] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.497 [2024-11-15 11:01:50.341759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.497 [2024-11-15 11:01:50.341787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:03.497 [2024-11-15 11:01:50.346281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.497 [2024-11-15 11:01:50.346317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.497 [2024-11-15 11:01:50.346345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:03.497 [2024-11-15 11:01:50.351083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.497 [2024-11-15 11:01:50.351121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.497 [2024-11-15 11:01:50.351150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:03.759 [2024-11-15 11:01:50.355898] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.759 [2024-11-15 11:01:50.355939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.759 [2024-11-15 11:01:50.355954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:03.759 [2024-11-15 11:01:50.360731] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.759 [2024-11-15 11:01:50.360784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.759 [2024-11-15 11:01:50.360812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:03.759 [2024-11-15 11:01:50.365370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.759 [2024-11-15 11:01:50.365405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.759 [2024-11-15 11:01:50.365433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:03.759 [2024-11-15 11:01:50.369840] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.759 [2024-11-15 11:01:50.369873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.759 [2024-11-15 11:01:50.369901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:03.759 [2024-11-15 11:01:50.374370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.759 [2024-11-15 11:01:50.374405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.759 [2024-11-15 11:01:50.374432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:03.759 [2024-11-15 11:01:50.378823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.759 [2024-11-15 11:01:50.378857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.759 [2024-11-15 11:01:50.378885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:03.759 [2024-11-15 11:01:50.383252] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.759 [2024-11-15 11:01:50.383289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.759 [2024-11-15 11:01:50.383316] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:03.759 [2024-11-15 11:01:50.387759] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.759 [2024-11-15 11:01:50.387796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.759 [2024-11-15 11:01:50.387871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:03.759 [2024-11-15 11:01:50.392361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.759 [2024-11-15 11:01:50.392395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.759 [2024-11-15 11:01:50.392422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:03.759 [2024-11-15 11:01:50.396830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.759 [2024-11-15 11:01:50.396865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.759 [2024-11-15 11:01:50.396892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:03.759 [2024-11-15 11:01:50.401235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.759 [2024-11-15 11:01:50.401268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.759 [2024-11-15 11:01:50.401296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:03.759 [2024-11-15 11:01:50.405595] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.759 [2024-11-15 11:01:50.405628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.759 [2024-11-15 11:01:50.405656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:03.759 [2024-11-15 11:01:50.409953] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.759 [2024-11-15 11:01:50.409987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.759 [2024-11-15 11:01:50.410015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:03.759 [2024-11-15 11:01:50.414351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.759 [2024-11-15 11:01:50.414384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.759 
[2024-11-15 11:01:50.414412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:03.759 [2024-11-15 11:01:50.418794] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.759 [2024-11-15 11:01:50.418828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.759 [2024-11-15 11:01:50.418855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:03.759 [2024-11-15 11:01:50.423176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.759 [2024-11-15 11:01:50.423210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.759 [2024-11-15 11:01:50.423237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:03.759 [2024-11-15 11:01:50.427557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.759 [2024-11-15 11:01:50.427592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.759 [2024-11-15 11:01:50.427621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:03.759 [2024-11-15 11:01:50.432004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.760 [2024-11-15 11:01:50.432041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.760 [2024-11-15 11:01:50.432069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:03.760 [2024-11-15 11:01:50.436407] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.760 [2024-11-15 11:01:50.436440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.760 [2024-11-15 11:01:50.436467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:03.760 [2024-11-15 11:01:50.440818] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.760 [2024-11-15 11:01:50.440852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.760 [2024-11-15 11:01:50.440880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:03.760 [2024-11-15 11:01:50.445256] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.760 [2024-11-15 11:01:50.445294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22912 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.760 [2024-11-15 11:01:50.445322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:03.760 [2024-11-15 11:01:50.449774] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.760 [2024-11-15 11:01:50.449812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.760 [2024-11-15 11:01:50.449840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:03.760 [2024-11-15 11:01:50.454281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.760 [2024-11-15 11:01:50.454319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.760 [2024-11-15 11:01:50.454347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:03.760 [2024-11-15 11:01:50.458705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.760 [2024-11-15 11:01:50.458739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.760 [2024-11-15 11:01:50.458767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:03.760 [2024-11-15 11:01:50.463138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.760 [2024-11-15 11:01:50.463171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.760 [2024-11-15 11:01:50.463199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:03.760 [2024-11-15 11:01:50.467613] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.760 [2024-11-15 11:01:50.467664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.760 [2024-11-15 11:01:50.467691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:03.760 [2024-11-15 11:01:50.472116] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.760 [2024-11-15 11:01:50.472183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.760 [2024-11-15 11:01:50.472211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:03.760 [2024-11-15 11:01:50.476631] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.760 [2024-11-15 11:01:50.476665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:5 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.760 [2024-11-15 11:01:50.476693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:03.760 [2024-11-15 11:01:50.480994] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.760 [2024-11-15 11:01:50.481044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.760 [2024-11-15 11:01:50.481072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:03.760 [2024-11-15 11:01:50.485271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.760 [2024-11-15 11:01:50.485321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.760 [2024-11-15 11:01:50.485349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:03.760 [2024-11-15 11:01:50.489835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.760 [2024-11-15 11:01:50.489886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.760 [2024-11-15 11:01:50.489914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:03.760 [2024-11-15 11:01:50.494739] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.760 [2024-11-15 11:01:50.494791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.760 [2024-11-15 11:01:50.494820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:03.760 [2024-11-15 11:01:50.499896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.760 [2024-11-15 11:01:50.499938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.760 [2024-11-15 11:01:50.499952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:03.760 [2024-11-15 11:01:50.504779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.760 [2024-11-15 11:01:50.504834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.760 [2024-11-15 11:01:50.504847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:03.760 [2024-11-15 11:01:50.509708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.760 [2024-11-15 11:01:50.509776] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.760 [2024-11-15 11:01:50.509804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:03.760 [2024-11-15 11:01:50.514234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.760 [2024-11-15 11:01:50.514283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.760 [2024-11-15 11:01:50.514311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:03.760 [2024-11-15 11:01:50.519104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.760 [2024-11-15 11:01:50.519141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.760 [2024-11-15 11:01:50.519153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:03.760 [2024-11-15 11:01:50.524492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.760 [2024-11-15 11:01:50.524553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.760 [2024-11-15 11:01:50.524596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:03.760 [2024-11-15 11:01:50.528998] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.760 [2024-11-15 11:01:50.529047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.760 [2024-11-15 11:01:50.529059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:03.760 [2024-11-15 11:01:50.533892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.760 [2024-11-15 11:01:50.533944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.760 [2024-11-15 11:01:50.533971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:03.760 [2024-11-15 11:01:50.539022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.760 [2024-11-15 11:01:50.539074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.760 [2024-11-15 11:01:50.539088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:03.760 [2024-11-15 11:01:50.544171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.760 
[2024-11-15 11:01:50.544221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.760 [2024-11-15 11:01:50.544234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:03.760 [2024-11-15 11:01:50.549099] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.760 [2024-11-15 11:01:50.549167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.760 [2024-11-15 11:01:50.549195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:03.760 [2024-11-15 11:01:50.553969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.760 [2024-11-15 11:01:50.554010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.760 [2024-11-15 11:01:50.554023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:03.760 [2024-11-15 11:01:50.558722] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.760 [2024-11-15 11:01:50.558761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.761 [2024-11-15 11:01:50.558775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:03.761 [2024-11-15 11:01:50.563350] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.761 [2024-11-15 11:01:50.563384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.761 [2024-11-15 11:01:50.563411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:03.761 [2024-11-15 11:01:50.568082] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.761 [2024-11-15 11:01:50.568122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.761 [2024-11-15 11:01:50.568136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:03.761 [2024-11-15 11:01:50.573024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.761 [2024-11-15 11:01:50.573057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.761 [2024-11-15 11:01:50.573085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:03.761 [2024-11-15 11:01:50.577440] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x821400) 00:18:03.761 [2024-11-15 11:01:50.577491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.761 [2024-11-15 11:01:50.577519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:03.761 [2024-11-15 11:01:50.582066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.761 [2024-11-15 11:01:50.582115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.761 [2024-11-15 11:01:50.582142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:03.761 [2024-11-15 11:01:50.586648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.761 [2024-11-15 11:01:50.586700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.761 [2024-11-15 11:01:50.586713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:03.761 [2024-11-15 11:01:50.591144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.761 [2024-11-15 11:01:50.591193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.761 [2024-11-15 11:01:50.591221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:03.761 [2024-11-15 11:01:50.595976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.761 [2024-11-15 11:01:50.596014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.761 [2024-11-15 11:01:50.596028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:03.761 [2024-11-15 11:01:50.600545] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.761 [2024-11-15 11:01:50.600605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.761 [2024-11-15 11:01:50.600633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:03.761 [2024-11-15 11:01:50.605077] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.761 [2024-11-15 11:01:50.605126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.761 [2024-11-15 11:01:50.605154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:03.761 [2024-11-15 11:01:50.609605] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.761 [2024-11-15 11:01:50.609655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.761 [2024-11-15 11:01:50.609683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:03.761 [2024-11-15 11:01:50.614182] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:03.761 [2024-11-15 11:01:50.614248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.761 [2024-11-15 11:01:50.614280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:04.022 [2024-11-15 11:01:50.618982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.022 [2024-11-15 11:01:50.619054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.022 [2024-11-15 11:01:50.619087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:04.022 [2024-11-15 11:01:50.623853] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.022 [2024-11-15 11:01:50.623894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.022 [2024-11-15 11:01:50.623910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:04.022 [2024-11-15 11:01:50.628482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.022 [2024-11-15 11:01:50.628558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.022 [2024-11-15 11:01:50.628573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:04.022 [2024-11-15 11:01:50.632951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.022 [2024-11-15 11:01:50.632984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.022 [2024-11-15 11:01:50.633012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:04.022 [2024-11-15 11:01:50.637555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.022 [2024-11-15 11:01:50.637618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.022 [2024-11-15 11:01:50.637647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
00:18:04.022 [2024-11-15 11:01:50.642042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.022 [2024-11-15 11:01:50.642076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.022 [2024-11-15 11:01:50.642103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:04.022 [2024-11-15 11:01:50.646410] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.022 [2024-11-15 11:01:50.646444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.022 [2024-11-15 11:01:50.646471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:04.022 [2024-11-15 11:01:50.650781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.022 [2024-11-15 11:01:50.650815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.022 [2024-11-15 11:01:50.650842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:04.022 [2024-11-15 11:01:50.655147] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.022 [2024-11-15 11:01:50.655180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.022 [2024-11-15 11:01:50.655207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:04.022 [2024-11-15 11:01:50.659469] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.022 [2024-11-15 11:01:50.659518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.022 [2024-11-15 11:01:50.659557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:04.022 [2024-11-15 11:01:50.663922] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.022 [2024-11-15 11:01:50.663957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.022 [2024-11-15 11:01:50.663985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:04.022 [2024-11-15 11:01:50.668310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.022 [2024-11-15 11:01:50.668342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.022 [2024-11-15 11:01:50.668369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:04.022 [2024-11-15 11:01:50.672657] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.022 [2024-11-15 11:01:50.672689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.022 [2024-11-15 11:01:50.672716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:04.022 [2024-11-15 11:01:50.676996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.022 [2024-11-15 11:01:50.677029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.022 [2024-11-15 11:01:50.677056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:04.022 [2024-11-15 11:01:50.681285] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.022 [2024-11-15 11:01:50.681317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.022 [2024-11-15 11:01:50.681344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:04.022 [2024-11-15 11:01:50.685691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.022 [2024-11-15 11:01:50.685739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.022 [2024-11-15 11:01:50.685766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:04.023 [2024-11-15 11:01:50.690038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.023 [2024-11-15 11:01:50.690070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.023 [2024-11-15 11:01:50.690098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:04.023 [2024-11-15 11:01:50.694482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.023 [2024-11-15 11:01:50.694555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.023 [2024-11-15 11:01:50.694568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:04.023 [2024-11-15 11:01:50.698856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.023 [2024-11-15 11:01:50.698889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.023 [2024-11-15 11:01:50.698915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:04.023 [2024-11-15 11:01:50.703219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.023 [2024-11-15 11:01:50.703252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.023 [2024-11-15 11:01:50.703278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:04.023 [2024-11-15 11:01:50.707968] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.023 [2024-11-15 11:01:50.708005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.023 [2024-11-15 11:01:50.708032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:04.023 [2024-11-15 11:01:50.712780] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.023 [2024-11-15 11:01:50.712830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.023 [2024-11-15 11:01:50.712858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:04.023 [2024-11-15 11:01:50.717270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.023 [2024-11-15 11:01:50.717304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.023 [2024-11-15 11:01:50.717331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:04.023 [2024-11-15 11:01:50.721594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.023 [2024-11-15 11:01:50.721643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.023 [2024-11-15 11:01:50.721670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:04.023 [2024-11-15 11:01:50.726007] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.023 [2024-11-15 11:01:50.726039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.023 [2024-11-15 11:01:50.726067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:04.023 [2024-11-15 11:01:50.730438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.023 [2024-11-15 11:01:50.730486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.023 [2024-11-15 11:01:50.730514] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:04.023 [2024-11-15 11:01:50.734774] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.023 [2024-11-15 11:01:50.734807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.023 [2024-11-15 11:01:50.734833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:04.023 [2024-11-15 11:01:50.739096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.023 [2024-11-15 11:01:50.739129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.023 [2024-11-15 11:01:50.739157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:04.023 [2024-11-15 11:01:50.743536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.023 [2024-11-15 11:01:50.743568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.023 [2024-11-15 11:01:50.743595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:04.023 [2024-11-15 11:01:50.748001] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.023 [2024-11-15 11:01:50.748036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.023 [2024-11-15 11:01:50.748064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:04.023 [2024-11-15 11:01:50.752403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.023 [2024-11-15 11:01:50.752435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.023 [2024-11-15 11:01:50.752462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:04.023 [2024-11-15 11:01:50.756800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.023 [2024-11-15 11:01:50.756832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.023 [2024-11-15 11:01:50.756860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:04.023 [2024-11-15 11:01:50.761211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.023 [2024-11-15 11:01:50.761245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.023 
[2024-11-15 11:01:50.761272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:04.023 [2024-11-15 11:01:50.765573] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.023 [2024-11-15 11:01:50.765621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.023 [2024-11-15 11:01:50.765649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:04.023 [2024-11-15 11:01:50.769926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.023 [2024-11-15 11:01:50.769959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.023 [2024-11-15 11:01:50.769986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:04.023 [2024-11-15 11:01:50.774335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.023 [2024-11-15 11:01:50.774368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.023 [2024-11-15 11:01:50.774395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:04.023 [2024-11-15 11:01:50.778705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.023 [2024-11-15 11:01:50.778753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.023 [2024-11-15 11:01:50.778781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:04.023 [2024-11-15 11:01:50.783033] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.023 [2024-11-15 11:01:50.783066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.023 [2024-11-15 11:01:50.783093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:04.023 [2024-11-15 11:01:50.787408] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.023 [2024-11-15 11:01:50.787441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.023 [2024-11-15 11:01:50.787468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:04.023 [2024-11-15 11:01:50.791664] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.023 [2024-11-15 11:01:50.791696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15936 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:18:04.023 [2024-11-15 11:01:50.791724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:04.023 [2024-11-15 11:01:50.795961] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.023 [2024-11-15 11:01:50.795997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.023 [2024-11-15 11:01:50.796025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:04.023 [2024-11-15 11:01:50.800410] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.023 [2024-11-15 11:01:50.800443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.023 [2024-11-15 11:01:50.800470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:04.023 [2024-11-15 11:01:50.804703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.023 [2024-11-15 11:01:50.804735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.023 [2024-11-15 11:01:50.804762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:04.024 [2024-11-15 11:01:50.809012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.024 [2024-11-15 11:01:50.809044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.024 [2024-11-15 11:01:50.809072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:04.024 [2024-11-15 11:01:50.813244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.024 [2024-11-15 11:01:50.813277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.024 [2024-11-15 11:01:50.813303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:04.024 [2024-11-15 11:01:50.817452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.024 [2024-11-15 11:01:50.817485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.024 [2024-11-15 11:01:50.817512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:04.024 [2024-11-15 11:01:50.821745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.024 [2024-11-15 11:01:50.821793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:1 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.024 [2024-11-15 11:01:50.821820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:04.024 [2024-11-15 11:01:50.826129] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.024 [2024-11-15 11:01:50.826162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.024 [2024-11-15 11:01:50.826190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:04.024 [2024-11-15 11:01:50.830481] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.024 [2024-11-15 11:01:50.830513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.024 [2024-11-15 11:01:50.830551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:04.024 [2024-11-15 11:01:50.834795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.024 [2024-11-15 11:01:50.834827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.024 [2024-11-15 11:01:50.834855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:04.024 [2024-11-15 11:01:50.839181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.024 [2024-11-15 11:01:50.839214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.024 [2024-11-15 11:01:50.839240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:04.024 [2024-11-15 11:01:50.843451] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.024 [2024-11-15 11:01:50.843500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.024 [2024-11-15 11:01:50.843527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:04.024 [2024-11-15 11:01:50.847955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.024 [2024-11-15 11:01:50.847990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.024 [2024-11-15 11:01:50.848018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:04.024 [2024-11-15 11:01:50.852391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.024 [2024-11-15 11:01:50.852451] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.024 [2024-11-15 11:01:50.852478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:04.024 [2024-11-15 11:01:50.856863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.024 [2024-11-15 11:01:50.856911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.024 [2024-11-15 11:01:50.856938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:04.024 [2024-11-15 11:01:50.861253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.024 [2024-11-15 11:01:50.861302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.024 [2024-11-15 11:01:50.861330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:04.024 [2024-11-15 11:01:50.866147] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.024 [2024-11-15 11:01:50.866198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.024 [2024-11-15 11:01:50.866226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:04.024 [2024-11-15 11:01:50.870931] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.024 [2024-11-15 11:01:50.870983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.024 [2024-11-15 11:01:50.871011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:04.024 [2024-11-15 11:01:50.875949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.024 [2024-11-15 11:01:50.876003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.024 [2024-11-15 11:01:50.876017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:04.285 [2024-11-15 11:01:50.880971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.285 [2024-11-15 11:01:50.881011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.285 [2024-11-15 11:01:50.881025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:04.285 [2024-11-15 11:01:50.885877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.285 
[2024-11-15 11:01:50.885941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.285 [2024-11-15 11:01:50.885968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:04.285 [2024-11-15 11:01:50.890859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.285 [2024-11-15 11:01:50.890909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.285 [2024-11-15 11:01:50.890937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:04.285 [2024-11-15 11:01:50.895514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.285 [2024-11-15 11:01:50.895579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.285 [2024-11-15 11:01:50.895594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:04.285 [2024-11-15 11:01:50.900260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.285 [2024-11-15 11:01:50.900293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.285 [2024-11-15 11:01:50.900321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:04.285 [2024-11-15 11:01:50.904925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.285 [2024-11-15 11:01:50.904978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.285 [2024-11-15 11:01:50.905022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:04.285 [2024-11-15 11:01:50.909662] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.285 [2024-11-15 11:01:50.909745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.285 [2024-11-15 11:01:50.909774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:04.285 [2024-11-15 11:01:50.914265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.285 [2024-11-15 11:01:50.914298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.285 [2024-11-15 11:01:50.914326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:04.285 [2024-11-15 11:01:50.918999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x821400) 00:18:04.285 [2024-11-15 11:01:50.919051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.285 [2024-11-15 11:01:50.919080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:04.285 [2024-11-15 11:01:50.923844] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.285 [2024-11-15 11:01:50.923884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.285 [2024-11-15 11:01:50.923897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:04.285 [2024-11-15 11:01:50.928844] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.285 [2024-11-15 11:01:50.928877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.285 [2024-11-15 11:01:50.928905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:04.285 [2024-11-15 11:01:50.933370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.285 [2024-11-15 11:01:50.933435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.285 [2024-11-15 11:01:50.933464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:04.285 6758.00 IOPS, 844.75 MiB/s [2024-11-15T11:01:51.146Z] [2024-11-15 11:01:50.939974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.285 [2024-11-15 11:01:50.940014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.285 [2024-11-15 11:01:50.940028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:04.285 [2024-11-15 11:01:50.944817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.285 [2024-11-15 11:01:50.944870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.285 [2024-11-15 11:01:50.944899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:04.285 [2024-11-15 11:01:50.949843] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.286 [2024-11-15 11:01:50.949910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.286 [2024-11-15 11:01:50.949938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:04.286 [2024-11-15 
11:01:50.954997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.286 [2024-11-15 11:01:50.955065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.286 [2024-11-15 11:01:50.955110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:04.286 [2024-11-15 11:01:50.960172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.286 [2024-11-15 11:01:50.960207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.286 [2024-11-15 11:01:50.960235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:04.286 [2024-11-15 11:01:50.965154] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.286 [2024-11-15 11:01:50.965206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.286 [2024-11-15 11:01:50.965234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:04.286 [2024-11-15 11:01:50.970207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.286 [2024-11-15 11:01:50.970258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.286 [2024-11-15 11:01:50.970287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:04.286 [2024-11-15 11:01:50.975096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.286 [2024-11-15 11:01:50.975146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.286 [2024-11-15 11:01:50.975174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:04.286 [2024-11-15 11:01:50.979767] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.286 [2024-11-15 11:01:50.979844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.286 [2024-11-15 11:01:50.979858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:04.286 [2024-11-15 11:01:50.984575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.286 [2024-11-15 11:01:50.984617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.286 [2024-11-15 11:01:50.984632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 
p:0 m:0 dnr:0 00:18:04.286 [2024-11-15 11:01:50.989318] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.286 [2024-11-15 11:01:50.989370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.286 [2024-11-15 11:01:50.989397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:04.286 [2024-11-15 11:01:50.994248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.286 [2024-11-15 11:01:50.994301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.286 [2024-11-15 11:01:50.994331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:04.286 [2024-11-15 11:01:50.998849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.286 [2024-11-15 11:01:50.998888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.286 [2024-11-15 11:01:50.998902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:04.286 [2024-11-15 11:01:51.003431] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.286 [2024-11-15 11:01:51.003500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.286 [2024-11-15 11:01:51.003531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:04.286 [2024-11-15 11:01:51.008260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.286 [2024-11-15 11:01:51.008309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.286 [2024-11-15 11:01:51.008337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:04.286 [2024-11-15 11:01:51.013222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.286 [2024-11-15 11:01:51.013262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.286 [2024-11-15 11:01:51.013276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:04.286 [2024-11-15 11:01:51.018039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.286 [2024-11-15 11:01:51.018078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.286 [2024-11-15 11:01:51.018092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:04.286 [2024-11-15 11:01:51.023023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.286 [2024-11-15 11:01:51.023074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.286 [2024-11-15 11:01:51.023103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:04.286 [2024-11-15 11:01:51.028003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.286 [2024-11-15 11:01:51.028042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.286 [2024-11-15 11:01:51.028056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:04.286 [2024-11-15 11:01:51.032602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.286 [2024-11-15 11:01:51.032662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.286 [2024-11-15 11:01:51.032691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:04.286 [2024-11-15 11:01:51.037532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.286 [2024-11-15 11:01:51.037580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.286 [2024-11-15 11:01:51.037595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:04.286 [2024-11-15 11:01:51.042457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.286 [2024-11-15 11:01:51.042510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.286 [2024-11-15 11:01:51.042564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:04.286 [2024-11-15 11:01:51.047337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.286 [2024-11-15 11:01:51.047389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.286 [2024-11-15 11:01:51.047434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:04.286 [2024-11-15 11:01:51.052222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.286 [2024-11-15 11:01:51.052291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.286 [2024-11-15 11:01:51.052305] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:04.286 [2024-11-15 11:01:51.057075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.286 [2024-11-15 11:01:51.057129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.286 [2024-11-15 11:01:51.057158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:04.286 [2024-11-15 11:01:51.061940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.286 [2024-11-15 11:01:51.062042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.286 [2024-11-15 11:01:51.062086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:04.286 [2024-11-15 11:01:51.066942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.286 [2024-11-15 11:01:51.066979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.286 [2024-11-15 11:01:51.067007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:04.286 [2024-11-15 11:01:51.071584] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.286 [2024-11-15 11:01:51.071633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.286 [2024-11-15 11:01:51.071647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:04.286 [2024-11-15 11:01:51.076443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.286 [2024-11-15 11:01:51.076495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.286 [2024-11-15 11:01:51.076523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:04.286 [2024-11-15 11:01:51.081355] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.287 [2024-11-15 11:01:51.081405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.287 [2024-11-15 11:01:51.081435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:04.287 [2024-11-15 11:01:51.086080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.287 [2024-11-15 11:01:51.086128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.287 [2024-11-15 11:01:51.086156] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:04.287 [2024-11-15 11:01:51.090672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.287 [2024-11-15 11:01:51.090739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.287 [2024-11-15 11:01:51.090768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:04.287 [2024-11-15 11:01:51.095285] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.287 [2024-11-15 11:01:51.095334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.287 [2024-11-15 11:01:51.095378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:04.287 [2024-11-15 11:01:51.100066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.287 [2024-11-15 11:01:51.100146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.287 [2024-11-15 11:01:51.100174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:04.287 [2024-11-15 11:01:51.104842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.287 [2024-11-15 11:01:51.104878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.287 [2024-11-15 11:01:51.104908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:04.287 [2024-11-15 11:01:51.109669] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.287 [2024-11-15 11:01:51.109709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.287 [2024-11-15 11:01:51.109723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:04.287 [2024-11-15 11:01:51.114430] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.287 [2024-11-15 11:01:51.114486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.287 [2024-11-15 11:01:51.114500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:04.287 [2024-11-15 11:01:51.119677] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.287 [2024-11-15 11:01:51.119715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:04.287 [2024-11-15 11:01:51.119729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:04.287 [2024-11-15 11:01:51.124539] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.287 [2024-11-15 11:01:51.124591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.287 [2024-11-15 11:01:51.124605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:04.287 [2024-11-15 11:01:51.129726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.287 [2024-11-15 11:01:51.129811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.287 [2024-11-15 11:01:51.129844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:04.287 [2024-11-15 11:01:51.134557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.287 [2024-11-15 11:01:51.134608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.287 [2024-11-15 11:01:51.134622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:04.287 [2024-11-15 11:01:51.139297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.287 [2024-11-15 11:01:51.139346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.287 [2024-11-15 11:01:51.139373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:04.547 [2024-11-15 11:01:51.144562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.548 [2024-11-15 11:01:51.144613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.548 [2024-11-15 11:01:51.144627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:04.548 [2024-11-15 11:01:51.149672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.548 [2024-11-15 11:01:51.149722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.548 [2024-11-15 11:01:51.149737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:04.548 [2024-11-15 11:01:51.154963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.548 [2024-11-15 11:01:51.155032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:352 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.548 [2024-11-15 11:01:51.155062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:04.548 [2024-11-15 11:01:51.159954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.548 [2024-11-15 11:01:51.159994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.548 [2024-11-15 11:01:51.160008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:04.548 [2024-11-15 11:01:51.164810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.548 [2024-11-15 11:01:51.164859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.548 [2024-11-15 11:01:51.164887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:04.548 [2024-11-15 11:01:51.169407] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.548 [2024-11-15 11:01:51.169493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.548 [2024-11-15 11:01:51.169522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:04.548 [2024-11-15 11:01:51.174168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.548 [2024-11-15 11:01:51.174218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.548 [2024-11-15 11:01:51.174246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:04.548 [2024-11-15 11:01:51.178916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.548 [2024-11-15 11:01:51.178966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.548 [2024-11-15 11:01:51.178993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:04.548 [2024-11-15 11:01:51.183494] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.548 [2024-11-15 11:01:51.183586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.548 [2024-11-15 11:01:51.183601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:04.548 [2024-11-15 11:01:51.188401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.548 [2024-11-15 11:01:51.188476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:13 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.548 [2024-11-15 11:01:51.188490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:04.548 [2024-11-15 11:01:51.193281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.548 [2024-11-15 11:01:51.193332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.548 [2024-11-15 11:01:51.193360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:04.548 [2024-11-15 11:01:51.198130] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.548 [2024-11-15 11:01:51.198180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.548 [2024-11-15 11:01:51.198208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:04.548 [2024-11-15 11:01:51.203059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.548 [2024-11-15 11:01:51.203110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.548 [2024-11-15 11:01:51.203138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:04.548 [2024-11-15 11:01:51.207950] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.548 [2024-11-15 11:01:51.207991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.548 [2024-11-15 11:01:51.208006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:04.548 [2024-11-15 11:01:51.212734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.548 [2024-11-15 11:01:51.212774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.548 [2024-11-15 11:01:51.212787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:04.548 [2024-11-15 11:01:51.217579] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.548 [2024-11-15 11:01:51.217632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.548 [2024-11-15 11:01:51.217646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:04.548 [2024-11-15 11:01:51.222289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.548 [2024-11-15 11:01:51.222355] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.548 [2024-11-15 11:01:51.222384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:04.548 [2024-11-15 11:01:51.227462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.548 [2024-11-15 11:01:51.227505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.548 [2024-11-15 11:01:51.227520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:04.548 [2024-11-15 11:01:51.232443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.548 [2024-11-15 11:01:51.232485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.548 [2024-11-15 11:01:51.232499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:04.548 [2024-11-15 11:01:51.237214] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.548 [2024-11-15 11:01:51.237266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.548 [2024-11-15 11:01:51.237295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:04.548 [2024-11-15 11:01:51.242030] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.548 [2024-11-15 11:01:51.242081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.548 [2024-11-15 11:01:51.242110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:04.548 [2024-11-15 11:01:51.246853] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.548 [2024-11-15 11:01:51.246904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.548 [2024-11-15 11:01:51.246933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:04.548 [2024-11-15 11:01:51.251569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.548 [2024-11-15 11:01:51.251618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.548 [2024-11-15 11:01:51.251632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:04.548 [2024-11-15 11:01:51.256360] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.548 
[2024-11-15 11:01:51.256409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.548 [2024-11-15 11:01:51.256438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:04.548 [2024-11-15 11:01:51.261181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.548 [2024-11-15 11:01:51.261229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.548 [2024-11-15 11:01:51.261258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:04.548 [2024-11-15 11:01:51.266002] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.548 [2024-11-15 11:01:51.266040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.548 [2024-11-15 11:01:51.266083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:04.548 [2024-11-15 11:01:51.270588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.549 [2024-11-15 11:01:51.270643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.549 [2024-11-15 11:01:51.270671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:04.549 [2024-11-15 11:01:51.275054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.549 [2024-11-15 11:01:51.275102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.549 [2024-11-15 11:01:51.275130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:04.549 [2024-11-15 11:01:51.279495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.549 [2024-11-15 11:01:51.279585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.549 [2024-11-15 11:01:51.279614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:04.549 [2024-11-15 11:01:51.284294] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.549 [2024-11-15 11:01:51.284342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.549 [2024-11-15 11:01:51.284369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:04.549 [2024-11-15 11:01:51.288852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x821400) 00:18:04.549 [2024-11-15 11:01:51.288901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.549 [2024-11-15 11:01:51.288929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:04.549 [2024-11-15 11:01:51.293269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.549 [2024-11-15 11:01:51.293317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.549 [2024-11-15 11:01:51.293345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:04.549 [2024-11-15 11:01:51.297669] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.549 [2024-11-15 11:01:51.297704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.549 [2024-11-15 11:01:51.297731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:04.549 [2024-11-15 11:01:51.302227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.549 [2024-11-15 11:01:51.302275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.549 [2024-11-15 11:01:51.302303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:04.549 [2024-11-15 11:01:51.306689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.549 [2024-11-15 11:01:51.306738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.549 [2024-11-15 11:01:51.306766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:04.549 [2024-11-15 11:01:51.311094] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.549 [2024-11-15 11:01:51.311143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.549 [2024-11-15 11:01:51.311170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:04.549 [2024-11-15 11:01:51.315965] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.549 [2024-11-15 11:01:51.316004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.549 [2024-11-15 11:01:51.316018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:04.549 [2024-11-15 11:01:51.320660] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.549 [2024-11-15 11:01:51.320712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.549 [2024-11-15 11:01:51.320726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:04.549 [2024-11-15 11:01:51.325384] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.549 [2024-11-15 11:01:51.325452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.549 [2024-11-15 11:01:51.325465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:04.549 [2024-11-15 11:01:51.330071] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.549 [2024-11-15 11:01:51.330122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.549 [2024-11-15 11:01:51.330151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:04.549 [2024-11-15 11:01:51.334925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.549 [2024-11-15 11:01:51.334975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.549 [2024-11-15 11:01:51.335003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:04.549 [2024-11-15 11:01:51.339579] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.549 [2024-11-15 11:01:51.339627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.549 [2024-11-15 11:01:51.339641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:04.549 [2024-11-15 11:01:51.344207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.549 [2024-11-15 11:01:51.344242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.549 [2024-11-15 11:01:51.344270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:04.549 [2024-11-15 11:01:51.348949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.549 [2024-11-15 11:01:51.349000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.549 [2024-11-15 11:01:51.349029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:18:04.549 [2024-11-15 11:01:51.353726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.549 [2024-11-15 11:01:51.353763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.549 [2024-11-15 11:01:51.353777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:04.549 [2024-11-15 11:01:51.358328] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.549 [2024-11-15 11:01:51.358380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.549 [2024-11-15 11:01:51.358409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:04.549 [2024-11-15 11:01:51.363086] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.549 [2024-11-15 11:01:51.363136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.549 [2024-11-15 11:01:51.363166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:04.549 [2024-11-15 11:01:51.367897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.549 [2024-11-15 11:01:51.367948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.549 [2024-11-15 11:01:51.367962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:04.549 [2024-11-15 11:01:51.372597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.549 [2024-11-15 11:01:51.372635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.549 [2024-11-15 11:01:51.372649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:04.549 [2024-11-15 11:01:51.377300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.549 [2024-11-15 11:01:51.377350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.549 [2024-11-15 11:01:51.377378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:04.549 [2024-11-15 11:01:51.381984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.549 [2024-11-15 11:01:51.382036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.549 [2024-11-15 11:01:51.382051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:04.549 [2024-11-15 11:01:51.386866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.549 [2024-11-15 11:01:51.386931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.549 [2024-11-15 11:01:51.386959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:04.549 [2024-11-15 11:01:51.391571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.549 [2024-11-15 11:01:51.391617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.549 [2024-11-15 11:01:51.391631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:04.549 [2024-11-15 11:01:51.396426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.550 [2024-11-15 11:01:51.396463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.550 [2024-11-15 11:01:51.396477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:04.550 [2024-11-15 11:01:51.401115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.550 [2024-11-15 11:01:51.401176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.550 [2024-11-15 11:01:51.401204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:04.550 [2024-11-15 11:01:51.406170] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.550 [2024-11-15 11:01:51.406222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.550 [2024-11-15 11:01:51.406252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:04.809 [2024-11-15 11:01:51.410952] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.809 [2024-11-15 11:01:51.411003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.809 [2024-11-15 11:01:51.411031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:04.809 [2024-11-15 11:01:51.416002] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.809 [2024-11-15 11:01:51.416043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.809 [2024-11-15 11:01:51.416057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:04.809 [2024-11-15 11:01:51.420797] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.810 [2024-11-15 11:01:51.420848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.810 [2024-11-15 11:01:51.420877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:04.810 [2024-11-15 11:01:51.425592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.810 [2024-11-15 11:01:51.425654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.810 [2024-11-15 11:01:51.425668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:04.810 [2024-11-15 11:01:51.430319] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.810 [2024-11-15 11:01:51.430370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.810 [2024-11-15 11:01:51.430399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:04.810 [2024-11-15 11:01:51.435117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.810 [2024-11-15 11:01:51.435168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.810 [2024-11-15 11:01:51.435197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:04.810 [2024-11-15 11:01:51.439907] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.810 [2024-11-15 11:01:51.439947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.810 [2024-11-15 11:01:51.439961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:04.810 [2024-11-15 11:01:51.444598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.810 [2024-11-15 11:01:51.444634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.810 [2024-11-15 11:01:51.444648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:04.810 [2024-11-15 11:01:51.449260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.810 [2024-11-15 11:01:51.449309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.810 [2024-11-15 11:01:51.449338] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:04.810 [2024-11-15 11:01:51.454075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.810 [2024-11-15 11:01:51.454127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.810 [2024-11-15 11:01:51.454156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:04.810 [2024-11-15 11:01:51.458915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.810 [2024-11-15 11:01:51.458996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.810 [2024-11-15 11:01:51.459026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:04.810 [2024-11-15 11:01:51.463608] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.810 [2024-11-15 11:01:51.463659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.810 [2024-11-15 11:01:51.463672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:04.810 [2024-11-15 11:01:51.468263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.810 [2024-11-15 11:01:51.468313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.810 [2024-11-15 11:01:51.468341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:04.810 [2024-11-15 11:01:51.473095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.810 [2024-11-15 11:01:51.473145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.810 [2024-11-15 11:01:51.473173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:04.810 [2024-11-15 11:01:51.477982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.810 [2024-11-15 11:01:51.478033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.810 [2024-11-15 11:01:51.478062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:04.810 [2024-11-15 11:01:51.482883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.810 [2024-11-15 11:01:51.482948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.810 
[2024-11-15 11:01:51.482977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:04.810 [2024-11-15 11:01:51.487770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.810 [2024-11-15 11:01:51.487821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.810 [2024-11-15 11:01:51.487837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:04.810 [2024-11-15 11:01:51.492547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.810 [2024-11-15 11:01:51.492599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.810 [2024-11-15 11:01:51.492614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:04.810 [2024-11-15 11:01:51.497302] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.810 [2024-11-15 11:01:51.497351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.810 [2024-11-15 11:01:51.497379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:04.810 [2024-11-15 11:01:51.501980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.810 [2024-11-15 11:01:51.502033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.810 [2024-11-15 11:01:51.502062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:04.810 [2024-11-15 11:01:51.506837] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.810 [2024-11-15 11:01:51.506888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.810 [2024-11-15 11:01:51.506918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:04.810 [2024-11-15 11:01:51.511670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.810 [2024-11-15 11:01:51.511707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.810 [2024-11-15 11:01:51.511721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:04.810 [2024-11-15 11:01:51.516370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.810 [2024-11-15 11:01:51.516424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23040 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.810 [2024-11-15 11:01:51.516437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:04.810 [2024-11-15 11:01:51.520988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.810 [2024-11-15 11:01:51.521026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.810 [2024-11-15 11:01:51.521040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:04.810 [2024-11-15 11:01:51.525466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.810 [2024-11-15 11:01:51.525503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.810 [2024-11-15 11:01:51.525517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:04.810 [2024-11-15 11:01:51.529923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.810 [2024-11-15 11:01:51.529962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.810 [2024-11-15 11:01:51.529976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:04.810 [2024-11-15 11:01:51.534347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.810 [2024-11-15 11:01:51.534384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.810 [2024-11-15 11:01:51.534397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:04.810 [2024-11-15 11:01:51.538879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.810 [2024-11-15 11:01:51.538917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.810 [2024-11-15 11:01:51.538930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:04.810 [2024-11-15 11:01:51.543458] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.810 [2024-11-15 11:01:51.543497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.810 [2024-11-15 11:01:51.543511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:04.810 [2024-11-15 11:01:51.547949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.810 [2024-11-15 11:01:51.547987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:9 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.810 [2024-11-15 11:01:51.548000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:04.810 [2024-11-15 11:01:51.552392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.810 [2024-11-15 11:01:51.552428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.810 [2024-11-15 11:01:51.552442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:04.810 [2024-11-15 11:01:51.556951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.810 [2024-11-15 11:01:51.556988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.810 [2024-11-15 11:01:51.557002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:04.810 [2024-11-15 11:01:51.561418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.810 [2024-11-15 11:01:51.561455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.810 [2024-11-15 11:01:51.561469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:04.810 [2024-11-15 11:01:51.565931] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.810 [2024-11-15 11:01:51.565968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.810 [2024-11-15 11:01:51.565982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:04.810 [2024-11-15 11:01:51.570403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.810 [2024-11-15 11:01:51.570440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.810 [2024-11-15 11:01:51.570453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:04.810 [2024-11-15 11:01:51.574876] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.810 [2024-11-15 11:01:51.574915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.810 [2024-11-15 11:01:51.574929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:04.810 [2024-11-15 11:01:51.579296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.811 [2024-11-15 11:01:51.579334] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.811 [2024-11-15 11:01:51.579347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:04.811 [2024-11-15 11:01:51.583846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.811 [2024-11-15 11:01:51.583884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.811 [2024-11-15 11:01:51.583897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:04.811 [2024-11-15 11:01:51.588326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.811 [2024-11-15 11:01:51.588362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.811 [2024-11-15 11:01:51.588375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:04.811 [2024-11-15 11:01:51.592737] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.811 [2024-11-15 11:01:51.592773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.811 [2024-11-15 11:01:51.592786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:04.811 [2024-11-15 11:01:51.597228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.811 [2024-11-15 11:01:51.597266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.811 [2024-11-15 11:01:51.597279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:04.811 [2024-11-15 11:01:51.601787] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.811 [2024-11-15 11:01:51.601824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.811 [2024-11-15 11:01:51.601837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:04.811 [2024-11-15 11:01:51.606256] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.811 [2024-11-15 11:01:51.606293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.811 [2024-11-15 11:01:51.606306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:04.811 [2024-11-15 11:01:51.610952] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.811 
[2024-11-15 11:01:51.611005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.811 [2024-11-15 11:01:51.611019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:04.811 [2024-11-15 11:01:51.615489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.811 [2024-11-15 11:01:51.615538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.811 [2024-11-15 11:01:51.615553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:04.811 [2024-11-15 11:01:51.619983] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.811 [2024-11-15 11:01:51.620021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.811 [2024-11-15 11:01:51.620035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:04.811 [2024-11-15 11:01:51.624737] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.811 [2024-11-15 11:01:51.624775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.811 [2024-11-15 11:01:51.624789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:04.811 [2024-11-15 11:01:51.629370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.811 [2024-11-15 11:01:51.629408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.811 [2024-11-15 11:01:51.629422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:04.811 [2024-11-15 11:01:51.633929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.811 [2024-11-15 11:01:51.633968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.811 [2024-11-15 11:01:51.633981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:04.811 [2024-11-15 11:01:51.638555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.811 [2024-11-15 11:01:51.638602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.811 [2024-11-15 11:01:51.638615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:04.811 [2024-11-15 11:01:51.642984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x821400) 00:18:04.811 [2024-11-15 11:01:51.643021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.811 [2024-11-15 11:01:51.643034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:04.811 [2024-11-15 11:01:51.647420] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.811 [2024-11-15 11:01:51.647458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.811 [2024-11-15 11:01:51.647472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:04.811 [2024-11-15 11:01:51.651903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.811 [2024-11-15 11:01:51.651941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.811 [2024-11-15 11:01:51.651954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:04.811 [2024-11-15 11:01:51.656372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.811 [2024-11-15 11:01:51.656408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.811 [2024-11-15 11:01:51.656422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:04.811 [2024-11-15 11:01:51.660887] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.811 [2024-11-15 11:01:51.660926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.811 [2024-11-15 11:01:51.660940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:04.811 [2024-11-15 11:01:51.665412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:04.811 [2024-11-15 11:01:51.665465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.811 [2024-11-15 11:01:51.665487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:05.071 [2024-11-15 11:01:51.670125] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:05.071 [2024-11-15 11:01:51.670167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.071 [2024-11-15 11:01:51.670181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:05.071 [2024-11-15 11:01:51.674760] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:05.071 [2024-11-15 11:01:51.674800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.071 [2024-11-15 11:01:51.674814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:05.071 [2024-11-15 11:01:51.679343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:05.071 [2024-11-15 11:01:51.679384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.071 [2024-11-15 11:01:51.679398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:05.071 [2024-11-15 11:01:51.683926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:05.071 [2024-11-15 11:01:51.683965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.071 [2024-11-15 11:01:51.683978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:05.071 [2024-11-15 11:01:51.688492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:05.071 [2024-11-15 11:01:51.688541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.071 [2024-11-15 11:01:51.688557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:05.071 [2024-11-15 11:01:51.692917] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:05.071 [2024-11-15 11:01:51.692954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.071 [2024-11-15 11:01:51.692967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:05.071 [2024-11-15 11:01:51.697500] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:05.071 [2024-11-15 11:01:51.697552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.071 [2024-11-15 11:01:51.697566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:05.071 [2024-11-15 11:01:51.701950] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:05.071 [2024-11-15 11:01:51.701987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.071 [2024-11-15 11:01:51.702001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
00:18:05.071 [2024-11-15 11:01:51.706481] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:05.071 [2024-11-15 11:01:51.706541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.071 [2024-11-15 11:01:51.706555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:05.071 [2024-11-15 11:01:51.710968] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:05.071 [2024-11-15 11:01:51.711005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.071 [2024-11-15 11:01:51.711019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:05.071 [2024-11-15 11:01:51.715563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:05.071 [2024-11-15 11:01:51.715599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.071 [2024-11-15 11:01:51.715612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:05.071 [2024-11-15 11:01:51.720121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:05.071 [2024-11-15 11:01:51.720159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.071 [2024-11-15 11:01:51.720173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:05.071 [2024-11-15 11:01:51.724656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:05.071 [2024-11-15 11:01:51.724694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.071 [2024-11-15 11:01:51.724708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:05.071 [2024-11-15 11:01:51.729212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:05.071 [2024-11-15 11:01:51.729250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.071 [2024-11-15 11:01:51.729265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:05.071 [2024-11-15 11:01:51.733775] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:05.071 [2024-11-15 11:01:51.733812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.072 [2024-11-15 11:01:51.733826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:05.072 [2024-11-15 11:01:51.738279] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:05.072 [2024-11-15 11:01:51.738316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.072 [2024-11-15 11:01:51.738330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:05.072 [2024-11-15 11:01:51.742883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:05.072 [2024-11-15 11:01:51.742922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.072 [2024-11-15 11:01:51.742936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:05.072 [2024-11-15 11:01:51.747775] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:05.072 [2024-11-15 11:01:51.747837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.072 [2024-11-15 11:01:51.747852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:05.072 [2024-11-15 11:01:51.752544] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:05.072 [2024-11-15 11:01:51.752584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.072 [2024-11-15 11:01:51.752599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:05.072 [2024-11-15 11:01:51.757098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:05.072 [2024-11-15 11:01:51.757137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.072 [2024-11-15 11:01:51.757151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:05.072 [2024-11-15 11:01:51.761718] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:05.072 [2024-11-15 11:01:51.761766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.072 [2024-11-15 11:01:51.761794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:05.072 [2024-11-15 11:01:51.766413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:05.072 [2024-11-15 11:01:51.766451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.072 [2024-11-15 11:01:51.766465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:05.072 [2024-11-15 11:01:51.770989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:05.072 [2024-11-15 11:01:51.771040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.072 [2024-11-15 11:01:51.771069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:05.072 [2024-11-15 11:01:51.775577] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:05.072 [2024-11-15 11:01:51.775637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.072 [2024-11-15 11:01:51.775665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:05.072 [2024-11-15 11:01:51.780357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:05.072 [2024-11-15 11:01:51.780403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.072 [2024-11-15 11:01:51.780431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:05.072 [2024-11-15 11:01:51.785172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:05.072 [2024-11-15 11:01:51.785221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.072 [2024-11-15 11:01:51.785249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:05.072 [2024-11-15 11:01:51.790060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:05.072 [2024-11-15 11:01:51.790108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.072 [2024-11-15 11:01:51.790136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:05.072 [2024-11-15 11:01:51.794695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:05.072 [2024-11-15 11:01:51.794761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.072 [2024-11-15 11:01:51.794788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:05.072 [2024-11-15 11:01:51.799213] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:05.072 [2024-11-15 11:01:51.799261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.072 [2024-11-15 11:01:51.799289] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:05.072 [2024-11-15 11:01:51.803962] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:05.072 [2024-11-15 11:01:51.804000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.072 [2024-11-15 11:01:51.804014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:05.072 [2024-11-15 11:01:51.808604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:05.072 [2024-11-15 11:01:51.808665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.072 [2024-11-15 11:01:51.808694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:05.072 [2024-11-15 11:01:51.813349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:05.072 [2024-11-15 11:01:51.813397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.072 [2024-11-15 11:01:51.813441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:05.072 [2024-11-15 11:01:51.818232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:05.072 [2024-11-15 11:01:51.818282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.072 [2024-11-15 11:01:51.818310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:05.072 [2024-11-15 11:01:51.823008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:05.072 [2024-11-15 11:01:51.823056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.072 [2024-11-15 11:01:51.823084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:05.072 [2024-11-15 11:01:51.827660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:05.072 [2024-11-15 11:01:51.827697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.072 [2024-11-15 11:01:51.827711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:05.072 [2024-11-15 11:01:51.832338] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:05.072 [2024-11-15 11:01:51.832386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.072 
[2024-11-15 11:01:51.832430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:05.072 [2024-11-15 11:01:51.837172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:05.072 [2024-11-15 11:01:51.837237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.072 [2024-11-15 11:01:51.837265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:05.072 [2024-11-15 11:01:51.841960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:05.072 [2024-11-15 11:01:51.842008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.072 [2024-11-15 11:01:51.842036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:05.072 [2024-11-15 11:01:51.846553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:05.072 [2024-11-15 11:01:51.846598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.072 [2024-11-15 11:01:51.846612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:05.072 [2024-11-15 11:01:51.851223] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:05.072 [2024-11-15 11:01:51.851271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.072 [2024-11-15 11:01:51.851299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:05.072 [2024-11-15 11:01:51.856073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:05.072 [2024-11-15 11:01:51.856127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.072 [2024-11-15 11:01:51.856156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:05.072 [2024-11-15 11:01:51.861523] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:05.072 [2024-11-15 11:01:51.861572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.073 [2024-11-15 11:01:51.861587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:05.073 [2024-11-15 11:01:51.866228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:05.073 [2024-11-15 11:01:51.866277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6432 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.073 [2024-11-15 11:01:51.866305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:05.073 [2024-11-15 11:01:51.871004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:05.073 [2024-11-15 11:01:51.871053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.073 [2024-11-15 11:01:51.871080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:05.073 [2024-11-15 11:01:51.875614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:05.073 [2024-11-15 11:01:51.875664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.073 [2024-11-15 11:01:51.875693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:05.073 [2024-11-15 11:01:51.880338] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:05.073 [2024-11-15 11:01:51.880387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.073 [2024-11-15 11:01:51.880415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:05.073 [2024-11-15 11:01:51.885054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:05.073 [2024-11-15 11:01:51.885102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.073 [2024-11-15 11:01:51.885130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:05.073 [2024-11-15 11:01:51.889827] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:05.073 [2024-11-15 11:01:51.889875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.073 [2024-11-15 11:01:51.889902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:05.073 [2024-11-15 11:01:51.894529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:05.073 [2024-11-15 11:01:51.894590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.073 [2024-11-15 11:01:51.894619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:05.073 [2024-11-15 11:01:51.899245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:05.073 [2024-11-15 11:01:51.899293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:5 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.073 [2024-11-15 11:01:51.899321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:05.073 [2024-11-15 11:01:51.904018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:05.073 [2024-11-15 11:01:51.904055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.073 [2024-11-15 11:01:51.904084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:05.073 [2024-11-15 11:01:51.908707] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:05.073 [2024-11-15 11:01:51.908782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.073 [2024-11-15 11:01:51.908810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:05.073 [2024-11-15 11:01:51.913352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:05.073 [2024-11-15 11:01:51.913400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.073 [2024-11-15 11:01:51.913445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:05.073 [2024-11-15 11:01:51.918015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:05.073 [2024-11-15 11:01:51.918063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.073 [2024-11-15 11:01:51.918091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:05.073 [2024-11-15 11:01:51.922630] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:05.073 [2024-11-15 11:01:51.922680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.073 [2024-11-15 11:01:51.922709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:05.073 [2024-11-15 11:01:51.927366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:05.073 [2024-11-15 11:01:51.927434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.073 [2024-11-15 11:01:51.927463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:05.332 [2024-11-15 11:01:51.932500] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400) 00:18:05.332 [2024-11-15 11:01:51.932562] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:05.332 [2024-11-15 11:01:51.932592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:18:05.332 [2024-11-15 11:01:51.938663] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x821400)
00:18:05.332 [2024-11-15 11:01:51.938714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:05.332 [2024-11-15 11:01:51.938743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:18:05.332 6649.50 IOPS, 831.19 MiB/s
00:18:05.332 Latency(us)
00:18:05.332 [2024-11-15T11:01:52.193Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:05.332 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:18:05.332 nvme0n1 : 2.00 6651.38 831.42 0.00 0.00 2402.07 2025.66 7268.54
00:18:05.332 [2024-11-15T11:01:52.193Z] ===================================================================================================================
00:18:05.332 [2024-11-15T11:01:52.193Z] Total : 6651.38 831.42 0.00 0.00 2402.07 2025.66 7268.54
00:18:05.332 {
00:18:05.332 "results": [
00:18:05.332 {
00:18:05.332 "job": "nvme0n1",
00:18:05.332 "core_mask": "0x2",
00:18:05.332 "workload": "randread",
00:18:05.332 "status": "finished",
00:18:05.332 "queue_depth": 16,
00:18:05.332 "io_size": 131072,
00:18:05.332 "runtime": 2.004094,
00:18:05.332 "iops": 6651.384615691679,
00:18:05.332 "mibps": 831.4230769614599,
00:18:05.332 "io_failed": 0,
00:18:05.332 "io_timeout": 0,
00:18:05.332 "avg_latency_us": 2402.071726658937,
00:18:05.332 "min_latency_us": 2025.658181818182,
00:18:05.332 "max_latency_us": 7268.538181818182
00:18:05.332 }
00:18:05.332 ],
00:18:05.332 "core_count": 1
00:18:05.332 }
00:18:05.332 11:01:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:18:05.332 11:01:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:18:05.332 11:01:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:18:05.332 | .driver_specific
00:18:05.332 | .nvme_error
00:18:05.332 | .status_code
00:18:05.332 | .command_transient_transport_error'
00:18:05.332 11:01:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:18:05.592 11:01:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 430 > 0 ))
00:18:05.592 11:01:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80060
00:18:05.592 11:01:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80060 ']'
00:18:05.592 11:01:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80060
00:18:05.592 11:01:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:18:05.592 11:01:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:18:05.592 11:01:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80060
00:18:05.592 11:01:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:18:05.592 11:01:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:18:05.592 11:01:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80060'
killing process with pid 80060
Received shutdown signal, test time was about 2.000000 seconds
00:18:05.592
00:18:05.592 Latency(us)
00:18:05.592 [2024-11-15T11:01:52.453Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:05.592 [2024-11-15T11:01:52.453Z] ===================================================================================================================
00:18:05.592 [2024-11-15T11:01:52.453Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:18:05.592 11:01:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80060
00:18:05.592 11:01:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80060
00:18:05.592 11:01:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:18:05.851 11:01:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:18:05.851 11:01:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:18:05.851 11:01:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:18:05.851 11:01:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:18:05.851 11:01:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:18:05.851 11:01:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80119
00:18:05.851 11:01:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80119 /var/tmp/bperf.sock
00:18:05.851 11:01:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80119 ']'
00:18:05.851 11:01:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:18:05.851 11:01:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:18:05.851 11:01:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:18:05.851 11:01:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:18:05.851 11:01:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:18:05.851 [2024-11-15 11:01:52.569168] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization...
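[Editor's note] The randread leg above passes or fails on the get_transient_errcount check: digest.sh reads the bdev I/O statistics over the bdevperf RPC socket and extracts how many completions carried TRANSIENT TRANSPORT ERROR status; that count must be non-zero (430 in this run) for the injected digest errors to count as detected. A minimal shell sketch of that check, built only from the commands visible in the trace (the errcount variable name is illustrative):

  errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  (( errcount > 0 ))   # the leg fails if no transient transport errors were recorded

With the check done, the first bdevperf process (pid 80060) is killed and a fresh one (pid 80119) is launched for the write-direction leg: randwrite, 4096-byte I/O, queue depth 128, a 2-second run, again serving RPCs on /var/tmp/bperf.sock. Its startup output follows.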
00:18:05.851 [2024-11-15 11:01:52.569279] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80119 ]
00:18:05.851 [2024-11-15 11:01:52.711457] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:06.110 [2024-11-15 11:01:52.768968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:18:06.110 [2024-11-15 11:01:52.822904] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
00:18:06.110 11:01:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:18:06.110 11:01:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:18:06.110 11:01:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:18:06.110 11:01:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:18:06.369 11:01:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:18:06.369 11:01:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:06.369 11:01:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:18:06.369 11:01:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:06.369 11:01:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:18:06.369 11:01:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:18:06.936 nvme0n1
00:18:06.936 11:01:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:18:06.936 11:01:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:06.936 11:01:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:18:06.936 11:01:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:06.936 11:01:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:18:06.936 11:01:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:18:06.936 Running I/O for 2 seconds...
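[Editor's note] Before the write-direction run starts, the trace above repeats the preparation pattern of the read leg: per-controller NVMe error statistics are enabled with unlimited bdev retries, any stale CRC32C error injection is cleared, a controller is attached over TCP with data digest (--ddgst) enabled against the target at 10.0.0.3:4420, and the accel framework is then armed to corrupt 256 crc32c operations so digest failures are guaranteed. Collected into one place as a sketch (not the script itself; bperf_rpc goes to the bdevperf socket shown in the log, while rpc_cmd goes to the SPDK application that owns the accel framework, whose socket path is not shown here and is left as a placeholder):

  BPERF='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock'
  TGT_RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s <accel-app socket, not shown in this log>'
  $BPERF bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  $TGT_RPC accel_error_inject_error -o crc32c -t disable          # clear any previous injection
  $BPERF bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 \
         -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0           # data digest enabled on the I/O qpair
  $TGT_RPC accel_error_inject_error -o crc32c -t corrupt -i 256   # corrupt the next 256 crc32c operations
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

The output that follows shows the same failure pattern as the read leg, now on WRITE commands: a Data digest error is reported for each corrupted PDU and the affected command completes with TRANSIENT TRANSPORT ERROR (00/22) status.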
00:18:06.936 [2024-11-15 11:01:53.698612] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166f7100 00:18:06.936 [2024-11-15 11:01:53.700009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:12930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:06.936 [2024-11-15 11:01:53.700057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:06.936 [2024-11-15 11:01:53.713006] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166f7970 00:18:06.936 [2024-11-15 11:01:53.714251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:13480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:06.936 [2024-11-15 11:01:53.714299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:06.936 [2024-11-15 11:01:53.727347] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166f81e0 00:18:06.936 [2024-11-15 11:01:53.728661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:4725 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:06.936 [2024-11-15 11:01:53.728711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:06.936 [2024-11-15 11:01:53.741482] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166f8a50 00:18:06.936 [2024-11-15 11:01:53.742776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:14028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:06.936 [2024-11-15 11:01:53.742854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:06.936 [2024-11-15 11:01:53.756074] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166f92c0 00:18:06.936 [2024-11-15 11:01:53.757476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:4108 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:06.936 [2024-11-15 11:01:53.757511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:06.936 [2024-11-15 11:01:53.770780] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166f9b30 00:18:06.936 [2024-11-15 11:01:53.772061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:21528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:06.936 [2024-11-15 11:01:53.772151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:06.936 [2024-11-15 11:01:53.785193] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166fa3a0 00:18:06.936 [2024-11-15 11:01:53.786430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:16607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:06.936 [2024-11-15 11:01:53.786477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0076 
p:0 m:0 dnr:0 00:18:07.195 [2024-11-15 11:01:53.799401] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166fac10 00:18:07.195 [2024-11-15 11:01:53.800686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:24804 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.195 [2024-11-15 11:01:53.800743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:07.195 [2024-11-15 11:01:53.813400] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166fb480 00:18:07.195 [2024-11-15 11:01:53.814580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.195 [2024-11-15 11:01:53.814625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:07.195 [2024-11-15 11:01:53.828117] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166fbcf0 00:18:07.195 [2024-11-15 11:01:53.829351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:2021 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.195 [2024-11-15 11:01:53.829401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:07.195 [2024-11-15 11:01:53.842140] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166fc560 00:18:07.195 [2024-11-15 11:01:53.843318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:12933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.195 [2024-11-15 11:01:53.843370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:07.195 [2024-11-15 11:01:53.855790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166fcdd0 00:18:07.195 [2024-11-15 11:01:53.857189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.195 [2024-11-15 11:01:53.857218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:07.195 [2024-11-15 11:01:53.869720] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166fd640 00:18:07.195 [2024-11-15 11:01:53.870815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12498 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.195 [2024-11-15 11:01:53.870847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:07.195 [2024-11-15 11:01:53.884631] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166fdeb0 00:18:07.195 [2024-11-15 11:01:53.885770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:3513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.195 [2024-11-15 11:01:53.885848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 
sqhd:0068 p:0 m:0 dnr:0 00:18:07.195 [2024-11-15 11:01:53.898917] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166fe720 00:18:07.195 [2024-11-15 11:01:53.900160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2858 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.195 [2024-11-15 11:01:53.900190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:07.195 [2024-11-15 11:01:53.913470] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166ff3c8 00:18:07.195 [2024-11-15 11:01:53.914493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16938 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.195 [2024-11-15 11:01:53.914581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:07.195 [2024-11-15 11:01:53.932742] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166ff3c8 00:18:07.195 [2024-11-15 11:01:53.934713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1811 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.195 [2024-11-15 11:01:53.934745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:07.195 [2024-11-15 11:01:53.945887] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166fe720 00:18:07.195 [2024-11-15 11:01:53.948048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20202 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.195 [2024-11-15 11:01:53.948084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:07.195 [2024-11-15 11:01:53.959955] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166fdeb0 00:18:07.195 [2024-11-15 11:01:53.961907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:1986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.195 [2024-11-15 11:01:53.961940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:07.195 [2024-11-15 11:01:53.974189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166fd640 00:18:07.195 [2024-11-15 11:01:53.976174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:16261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.195 [2024-11-15 11:01:53.976208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:07.195 [2024-11-15 11:01:53.988206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166fcdd0 00:18:07.195 [2024-11-15 11:01:53.990144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:21394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.195 [2024-11-15 11:01:53.990175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 
cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:07.195 [2024-11-15 11:01:54.002084] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166fc560 00:18:07.195 [2024-11-15 11:01:54.004052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:13952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.195 [2024-11-15 11:01:54.004117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:07.195 [2024-11-15 11:01:54.016507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166fbcf0 00:18:07.195 [2024-11-15 11:01:54.018396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.195 [2024-11-15 11:01:54.018447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:07.195 [2024-11-15 11:01:54.030817] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166fb480 00:18:07.195 [2024-11-15 11:01:54.032745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:24908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.195 [2024-11-15 11:01:54.032793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:07.195 [2024-11-15 11:01:54.044730] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166fac10 00:18:07.195 [2024-11-15 11:01:54.046741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:6987 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.195 [2024-11-15 11:01:54.046774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:07.455 [2024-11-15 11:01:54.058972] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166fa3a0 00:18:07.455 [2024-11-15 11:01:54.060867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:818 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.455 [2024-11-15 11:01:54.060917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:07.455 [2024-11-15 11:01:54.072600] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166f9b30 00:18:07.455 [2024-11-15 11:01:54.074404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:11912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.455 [2024-11-15 11:01:54.074435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:07.455 [2024-11-15 11:01:54.085825] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166f92c0 00:18:07.455 [2024-11-15 11:01:54.087633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:12108 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.455 [2024-11-15 11:01:54.087666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:40 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:07.455 [2024-11-15 11:01:54.099122] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166f8a50 00:18:07.455 [2024-11-15 11:01:54.100941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:18283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.455 [2024-11-15 11:01:54.100990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:07.455 [2024-11-15 11:01:54.113602] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166f81e0 00:18:07.455 [2024-11-15 11:01:54.115367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:1384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.455 [2024-11-15 11:01:54.115400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:07.455 [2024-11-15 11:01:54.127959] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166f7970 00:18:07.455 [2024-11-15 11:01:54.129931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:4749 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.455 [2024-11-15 11:01:54.129987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:07.455 [2024-11-15 11:01:54.142821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166f7100 00:18:07.455 [2024-11-15 11:01:54.144864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:24069 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.455 [2024-11-15 11:01:54.144930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:07.455 [2024-11-15 11:01:54.157455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166f6890 00:18:07.455 [2024-11-15 11:01:54.159233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:17226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.455 [2024-11-15 11:01:54.159265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:07.455 [2024-11-15 11:01:54.171512] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166f6020 00:18:07.455 [2024-11-15 11:01:54.173244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.455 [2024-11-15 11:01:54.173275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:07.455 [2024-11-15 11:01:54.185600] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166f57b0 00:18:07.455 [2024-11-15 11:01:54.187473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:24115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.455 [2024-11-15 11:01:54.187506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:07.455 [2024-11-15 11:01:54.199797] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166f4f40 00:18:07.455 [2024-11-15 11:01:54.201642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:17576 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.455 [2024-11-15 11:01:54.201702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:07.455 [2024-11-15 11:01:54.214350] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166f46d0 00:18:07.455 [2024-11-15 11:01:54.216179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:11894 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.455 [2024-11-15 11:01:54.216397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:07.455 [2024-11-15 11:01:54.229475] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166f3e60 00:18:07.455 [2024-11-15 11:01:54.231440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:6253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.455 [2024-11-15 11:01:54.231474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:07.455 [2024-11-15 11:01:54.244714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166f35f0 00:18:07.455 [2024-11-15 11:01:54.246592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:2651 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.455 [2024-11-15 11:01:54.246635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:07.455 [2024-11-15 11:01:54.259342] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166f2d80 00:18:07.455 [2024-11-15 11:01:54.261175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:5906 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.455 [2024-11-15 11:01:54.261211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:07.455 [2024-11-15 11:01:54.273672] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166f2510 00:18:07.455 [2024-11-15 11:01:54.275265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:21162 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.455 [2024-11-15 11:01:54.275296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:07.455 [2024-11-15 11:01:54.287397] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166f1ca0 00:18:07.455 [2024-11-15 11:01:54.289133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:4219 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.455 [2024-11-15 11:01:54.289287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:07.455 [2024-11-15 11:01:54.302310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166f1430 00:18:07.455 [2024-11-15 11:01:54.304038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:1309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.455 [2024-11-15 11:01:54.304190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:07.715 [2024-11-15 11:01:54.317209] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166f0bc0 00:18:07.715 [2024-11-15 11:01:54.318900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:2006 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.715 [2024-11-15 11:01:54.319092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:07.715 [2024-11-15 11:01:54.332070] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166f0350 00:18:07.715 [2024-11-15 11:01:54.333678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.715 [2024-11-15 11:01:54.333722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:07.715 [2024-11-15 11:01:54.345684] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166efae0 00:18:07.715 [2024-11-15 11:01:54.347205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:140 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.715 [2024-11-15 11:01:54.347236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:07.715 [2024-11-15 11:01:54.359293] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166ef270 00:18:07.715 [2024-11-15 11:01:54.361118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:9145 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.715 [2024-11-15 11:01:54.361160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:07.715 [2024-11-15 11:01:54.374208] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166eea00 00:18:07.715 [2024-11-15 11:01:54.376071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:8753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.715 [2024-11-15 11:01:54.376103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:07.715 [2024-11-15 11:01:54.389167] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166ee190 00:18:07.715 [2024-11-15 11:01:54.390958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:4104 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.715 [2024-11-15 11:01:54.390996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:07.715 [2024-11-15 11:01:54.403277] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166ed920 00:18:07.715 [2024-11-15 11:01:54.404995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:17269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.715 [2024-11-15 11:01:54.405056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:07.715 [2024-11-15 11:01:54.417554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166ed0b0 00:18:07.715 [2024-11-15 11:01:54.419061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:13477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.715 [2024-11-15 11:01:54.419126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:07.715 [2024-11-15 11:01:54.432902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166ec840 00:18:07.715 [2024-11-15 11:01:54.434386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:22546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.715 [2024-11-15 11:01:54.434450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:07.715 [2024-11-15 11:01:54.447892] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166ebfd0 00:18:07.715 [2024-11-15 11:01:54.449485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:4506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.715 [2024-11-15 11:01:54.449520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:07.715 [2024-11-15 11:01:54.463138] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166eb760 00:18:07.715 [2024-11-15 11:01:54.464969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:6591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.715 [2024-11-15 11:01:54.464997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:07.715 [2024-11-15 11:01:54.477704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166eaef0 00:18:07.715 [2024-11-15 11:01:54.479295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:2435 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.715 [2024-11-15 11:01:54.479329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:07.715 [2024-11-15 11:01:54.492140] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166ea680 00:18:07.715 [2024-11-15 11:01:54.493607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:16411 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.715 [2024-11-15 11:01:54.493654] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:07.715 [2024-11-15 11:01:54.506880] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166e9e10 00:18:07.715 [2024-11-15 11:01:54.508291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:21760 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.715 [2024-11-15 11:01:54.508325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:07.715 [2024-11-15 11:01:54.521014] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166e95a0 00:18:07.715 [2024-11-15 11:01:54.522567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:9121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.715 [2024-11-15 11:01:54.522622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:07.715 [2024-11-15 11:01:54.535186] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166e8d30 00:18:07.715 [2024-11-15 11:01:54.536621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:2341 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.715 [2024-11-15 11:01:54.536682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:07.715 [2024-11-15 11:01:54.549464] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166e84c0 00:18:07.715 [2024-11-15 11:01:54.550915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:3993 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.715 [2024-11-15 11:01:54.550963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:07.715 [2024-11-15 11:01:54.564012] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166e7c50 00:18:07.715 [2024-11-15 11:01:54.565665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:17640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.716 [2024-11-15 11:01:54.565693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:07.975 [2024-11-15 11:01:54.578363] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166e73e0 00:18:07.975 [2024-11-15 11:01:54.579708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:20228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.975 [2024-11-15 11:01:54.579745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:07.975 [2024-11-15 11:01:54.591950] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166e6b70 00:18:07.975 [2024-11-15 11:01:54.593219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:21906 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.975 [2024-11-15 
11:01:54.593254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:07.975 [2024-11-15 11:01:54.605326] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166e6300 00:18:07.975 [2024-11-15 11:01:54.606613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:9357 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.975 [2024-11-15 11:01:54.606674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:07.975 [2024-11-15 11:01:54.618804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166e5a90 00:18:07.975 [2024-11-15 11:01:54.620054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:4654 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.975 [2024-11-15 11:01:54.620089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:07.975 [2024-11-15 11:01:54.632184] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166e5220 00:18:07.975 [2024-11-15 11:01:54.633408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:17789 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.975 [2024-11-15 11:01:54.633440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:07.975 [2024-11-15 11:01:54.646343] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166e49b0 00:18:07.975 [2024-11-15 11:01:54.647660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:18939 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.975 [2024-11-15 11:01:54.647696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:07.975 [2024-11-15 11:01:54.661085] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166e4140 00:18:07.975 [2024-11-15 11:01:54.662350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:15270 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.975 [2024-11-15 11:01:54.662385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:07.975 [2024-11-15 11:01:54.675508] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166e38d0 00:18:07.975 [2024-11-15 11:01:54.676867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:20956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.975 [2024-11-15 11:01:54.676916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:07.975 17585.00 IOPS, 68.69 MiB/s [2024-11-15T11:01:54.836Z] [2024-11-15 11:01:54.689979] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166e3060 00:18:07.975 [2024-11-15 11:01:54.691191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 
nsid:1 lba:2645 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.975 [2024-11-15 11:01:54.691386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:07.975 [2024-11-15 11:01:54.704352] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166e27f0 00:18:07.975 [2024-11-15 11:01:54.705749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:23748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.975 [2024-11-15 11:01:54.705801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:07.975 [2024-11-15 11:01:54.718778] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166e1f80 00:18:07.975 [2024-11-15 11:01:54.719986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:9796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.975 [2024-11-15 11:01:54.720023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:07.975 [2024-11-15 11:01:54.733205] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166e1710 00:18:07.975 [2024-11-15 11:01:54.734449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:18221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.975 [2024-11-15 11:01:54.734484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:07.975 [2024-11-15 11:01:54.747579] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166e0ea0 00:18:07.975 [2024-11-15 11:01:54.748876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:3204 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.975 [2024-11-15 11:01:54.748906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:07.975 [2024-11-15 11:01:54.762165] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166e0630 00:18:07.975 [2024-11-15 11:01:54.763384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.975 [2024-11-15 11:01:54.763451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:07.975 [2024-11-15 11:01:54.777251] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166dfdc0 00:18:07.975 [2024-11-15 11:01:54.778458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.975 [2024-11-15 11:01:54.778495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:07.975 [2024-11-15 11:01:54.791457] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166df550 00:18:07.976 [2024-11-15 11:01:54.792952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:11 nsid:1 lba:11662 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.976 [2024-11-15 11:01:54.792981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:07.976 [2024-11-15 11:01:54.806822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166dece0 00:18:07.976 [2024-11-15 11:01:54.808200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.976 [2024-11-15 11:01:54.808249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:07.976 [2024-11-15 11:01:54.821938] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166de470 00:18:07.976 [2024-11-15 11:01:54.823191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.976 [2024-11-15 11:01:54.823240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:08.234 [2024-11-15 11:01:54.843950] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166ddc00 00:18:08.234 [2024-11-15 11:01:54.846096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.234 [2024-11-15 11:01:54.846135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:08.234 [2024-11-15 11:01:54.858594] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166de470 00:18:08.234 [2024-11-15 11:01:54.860716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23193 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.234 [2024-11-15 11:01:54.860875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:08.234 [2024-11-15 11:01:54.873363] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166dece0 00:18:08.234 [2024-11-15 11:01:54.875647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:21298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.234 [2024-11-15 11:01:54.875796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:08.234 [2024-11-15 11:01:54.888222] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166df550 00:18:08.234 [2024-11-15 11:01:54.890255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:13587 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.234 [2024-11-15 11:01:54.890291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:08.234 [2024-11-15 11:01:54.902741] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166dfdc0 00:18:08.234 [2024-11-15 11:01:54.904833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:9539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.234 [2024-11-15 11:01:54.904870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:08.234 [2024-11-15 11:01:54.917424] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166e0630 00:18:08.234 [2024-11-15 11:01:54.919651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:18460 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.234 [2024-11-15 11:01:54.919687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:08.234 [2024-11-15 11:01:54.932227] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166e0ea0 00:18:08.234 [2024-11-15 11:01:54.934240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:15889 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.234 [2024-11-15 11:01:54.934392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:08.234 [2024-11-15 11:01:54.946986] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166e1710 00:18:08.234 [2024-11-15 11:01:54.949020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:25343 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.234 [2024-11-15 11:01:54.949170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:08.234 [2024-11-15 11:01:54.961744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166e1f80 00:18:08.234 [2024-11-15 11:01:54.963709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:3949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.234 [2024-11-15 11:01:54.963747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:08.234 [2024-11-15 11:01:54.976463] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166e27f0 00:18:08.234 [2024-11-15 11:01:54.978393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:7695 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.234 [2024-11-15 11:01:54.978436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:08.234 [2024-11-15 11:01:54.991770] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166e3060 00:18:08.234 [2024-11-15 11:01:54.994118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:3844 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.234 [2024-11-15 11:01:54.994153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:08.234 [2024-11-15 11:01:55.007678] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166e38d0 00:18:08.234 [2024-11-15 11:01:55.009848] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:7438 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.234 [2024-11-15 11:01:55.009887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:08.235 [2024-11-15 11:01:55.023530] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166e4140 00:18:08.235 [2024-11-15 11:01:55.025762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:17483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.235 [2024-11-15 11:01:55.025801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:08.235 [2024-11-15 11:01:55.039397] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166e49b0 00:18:08.235 [2024-11-15 11:01:55.041668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:19617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.235 [2024-11-15 11:01:55.041823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:08.235 [2024-11-15 11:01:55.055548] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166e5220 00:18:08.235 [2024-11-15 11:01:55.057669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:21874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.235 [2024-11-15 11:01:55.057715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:08.235 [2024-11-15 11:01:55.071224] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166e5a90 00:18:08.235 [2024-11-15 11:01:55.073272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:1730 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.235 [2024-11-15 11:01:55.073307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:08.235 [2024-11-15 11:01:55.086709] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166e6300 00:18:08.235 [2024-11-15 11:01:55.088830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:3929 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.235 [2024-11-15 11:01:55.088867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:08.494 [2024-11-15 11:01:55.102362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166e6b70 00:18:08.494 [2024-11-15 11:01:55.104269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:6700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.494 [2024-11-15 11:01:55.104304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:08.494 [2024-11-15 11:01:55.117475] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166e73e0 00:18:08.494 [2024-11-15 11:01:55.119278] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:7284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.494 [2024-11-15 11:01:55.119313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:08.494 [2024-11-15 11:01:55.132426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166e7c50 00:18:08.494 [2024-11-15 11:01:55.134409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:3278 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.494 [2024-11-15 11:01:55.134463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:08.494 [2024-11-15 11:01:55.148037] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166e84c0 00:18:08.494 [2024-11-15 11:01:55.149807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:20168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.494 [2024-11-15 11:01:55.149843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:08.494 [2024-11-15 11:01:55.162837] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166e8d30 00:18:08.494 [2024-11-15 11:01:55.164639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:9531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.494 [2024-11-15 11:01:55.164675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:08.494 [2024-11-15 11:01:55.177585] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166e95a0 00:18:08.494 [2024-11-15 11:01:55.179314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:21713 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.494 [2024-11-15 11:01:55.179347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:08.494 [2024-11-15 11:01:55.191239] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166e9e10 00:18:08.494 [2024-11-15 11:01:55.192954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:25367 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.494 [2024-11-15 11:01:55.192986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:08.494 [2024-11-15 11:01:55.204518] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166ea680 00:18:08.494 [2024-11-15 11:01:55.206141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:1399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.494 [2024-11-15 11:01:55.206172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:08.494 [2024-11-15 11:01:55.218540] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166eaef0 00:18:08.494 [2024-11-15 
11:01:55.220204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:12938 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.494 [2024-11-15 11:01:55.220386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:08.494 [2024-11-15 11:01:55.232517] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166eb760 00:18:08.494 [2024-11-15 11:01:55.234105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:7936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.494 [2024-11-15 11:01:55.234137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:08.494 [2024-11-15 11:01:55.245961] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166ebfd0 00:18:08.494 [2024-11-15 11:01:55.247741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:7855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.494 [2024-11-15 11:01:55.247798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:08.494 [2024-11-15 11:01:55.259388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166ec840 00:18:08.494 [2024-11-15 11:01:55.260966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:14584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.494 [2024-11-15 11:01:55.260997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:08.494 [2024-11-15 11:01:55.272484] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166ed0b0 00:18:08.494 [2024-11-15 11:01:55.274044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:25348 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.494 [2024-11-15 11:01:55.274074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:08.494 [2024-11-15 11:01:55.286134] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166ed920 00:18:08.494 [2024-11-15 11:01:55.287679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:20777 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.494 [2024-11-15 11:01:55.287713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:08.494 [2024-11-15 11:01:55.299349] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166ee190 00:18:08.494 [2024-11-15 11:01:55.300999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:7130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.494 [2024-11-15 11:01:55.301030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:08.494 [2024-11-15 11:01:55.312595] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166eea00 00:18:08.494 
[2024-11-15 11:01:55.314082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.494 [2024-11-15 11:01:55.314113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:08.494 [2024-11-15 11:01:55.326208] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166ef270 00:18:08.494 [2024-11-15 11:01:55.327746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:14789 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.494 [2024-11-15 11:01:55.327780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:08.494 [2024-11-15 11:01:55.339497] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166efae0 00:18:08.494 [2024-11-15 11:01:55.341186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:17699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.494 [2024-11-15 11:01:55.341213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:08.494 [2024-11-15 11:01:55.353365] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166f0350 00:18:08.754 [2024-11-15 11:01:55.354835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:14348 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.754 [2024-11-15 11:01:55.354873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:08.754 [2024-11-15 11:01:55.367292] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166f0bc0 00:18:08.754 [2024-11-15 11:01:55.368844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:11798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.754 [2024-11-15 11:01:55.368878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:08.754 [2024-11-15 11:01:55.380757] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166f1430 00:18:08.754 [2024-11-15 11:01:55.382172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:6681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.754 [2024-11-15 11:01:55.382211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:08.754 [2024-11-15 11:01:55.394133] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166f1ca0 00:18:08.754 [2024-11-15 11:01:55.395740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:14966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.754 [2024-11-15 11:01:55.395780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:08.754 [2024-11-15 11:01:55.407883] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166f2510 
00:18:08.754 [2024-11-15 11:01:55.409497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:1000 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.754 [2024-11-15 11:01:55.409551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:08.754 [2024-11-15 11:01:55.421541] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166f2d80 00:18:08.754 [2024-11-15 11:01:55.423051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:2627 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.754 [2024-11-15 11:01:55.423085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:08.754 [2024-11-15 11:01:55.435307] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166f35f0 00:18:08.754 [2024-11-15 11:01:55.437039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:12666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.754 [2024-11-15 11:01:55.437075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:08.754 [2024-11-15 11:01:55.449521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166f3e60 00:18:08.754 [2024-11-15 11:01:55.450883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:18521 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.754 [2024-11-15 11:01:55.450922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:08.754 [2024-11-15 11:01:55.463172] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166f46d0 00:18:08.754 [2024-11-15 11:01:55.464506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:11586 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.754 [2024-11-15 11:01:55.464723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:08.754 [2024-11-15 11:01:55.476760] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166f4f40 00:18:08.754 [2024-11-15 11:01:55.478229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:17502 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.754 [2024-11-15 11:01:55.478268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:08.754 [2024-11-15 11:01:55.490228] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166f57b0 00:18:08.754 [2024-11-15 11:01:55.491528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:20885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.754 [2024-11-15 11:01:55.491618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:08.754 [2024-11-15 11:01:55.503594] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with 
pdu=0x2000166f6020 00:18:08.754 [2024-11-15 11:01:55.505090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:1972 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.754 [2024-11-15 11:01:55.505124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:08.754 [2024-11-15 11:01:55.517006] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166f6890 00:18:08.754 [2024-11-15 11:01:55.518268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:285 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.754 [2024-11-15 11:01:55.518300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:08.754 [2024-11-15 11:01:55.530371] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166f7100 00:18:08.754 [2024-11-15 11:01:55.531669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:15719 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.754 [2024-11-15 11:01:55.531704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:08.754 [2024-11-15 11:01:55.543971] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166f7970 00:18:08.754 [2024-11-15 11:01:55.545206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:5701 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.754 [2024-11-15 11:01:55.545240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.754 [2024-11-15 11:01:55.557288] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166f81e0 00:18:08.754 [2024-11-15 11:01:55.558535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:22814 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.754 [2024-11-15 11:01:55.558607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:08.754 [2024-11-15 11:01:55.570548] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166f8a50 00:18:08.754 [2024-11-15 11:01:55.571747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:15789 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.754 [2024-11-15 11:01:55.571957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:08.754 [2024-11-15 11:01:55.584084] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166f92c0 00:18:08.754 [2024-11-15 11:01:55.585294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:22797 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.754 [2024-11-15 11:01:55.585338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:08.754 [2024-11-15 11:01:55.597499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xff4750) with pdu=0x2000166f9b30 00:18:08.754 [2024-11-15 11:01:55.598690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:22422 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.754 [2024-11-15 11:01:55.598735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:08.754 [2024-11-15 11:01:55.610919] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166fa3a0 00:18:08.754 [2024-11-15 11:01:55.612127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:18144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.754 [2024-11-15 11:01:55.612188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:09.013 [2024-11-15 11:01:55.624790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166fac10 00:18:09.013 [2024-11-15 11:01:55.625936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:10324 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:09.013 [2024-11-15 11:01:55.625981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:09.013 [2024-11-15 11:01:55.638086] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166fb480 00:18:09.013 [2024-11-15 11:01:55.639234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:23471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:09.013 [2024-11-15 11:01:55.639256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:09.013 [2024-11-15 11:01:55.651432] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166fbcf0 00:18:09.013 [2024-11-15 11:01:55.652584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:3010 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:09.013 [2024-11-15 11:01:55.652636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:09.013 [2024-11-15 11:01:55.664636] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166fc560 00:18:09.013 [2024-11-15 11:01:55.665738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:16111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:09.013 [2024-11-15 11:01:55.665781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:09.013 [2024-11-15 11:01:55.677749] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4750) with pdu=0x2000166fcdd0 00:18:09.013 [2024-11-15 11:01:55.678821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:9128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:09.013 [2024-11-15 11:01:55.678864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:09.013 17711.00 IOPS, 69.18 MiB/s 00:18:09.013 Latency(us) 00:18:09.013 
[2024-11-15T11:01:55.874Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:09.013 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:09.013 nvme0n1 : 2.01 17724.77 69.24 0.00 0.00 7208.41 3842.79 28359.21 00:18:09.013 [2024-11-15T11:01:55.874Z] =================================================================================================================== 00:18:09.013 [2024-11-15T11:01:55.874Z] Total : 17724.77 69.24 0.00 0.00 7208.41 3842.79 28359.21 00:18:09.013 { 00:18:09.013 "results": [ 00:18:09.013 { 00:18:09.013 "job": "nvme0n1", 00:18:09.013 "core_mask": "0x2", 00:18:09.013 "workload": "randwrite", 00:18:09.014 "status": "finished", 00:18:09.014 "queue_depth": 128, 00:18:09.014 "io_size": 4096, 00:18:09.014 "runtime": 2.006852, 00:18:09.014 "iops": 17724.774921120243, 00:18:09.014 "mibps": 69.23740203562595, 00:18:09.014 "io_failed": 0, 00:18:09.014 "io_timeout": 0, 00:18:09.014 "avg_latency_us": 7208.413560995806, 00:18:09.014 "min_latency_us": 3842.7927272727275, 00:18:09.014 "max_latency_us": 28359.214545454546 00:18:09.014 } 00:18:09.014 ], 00:18:09.014 "core_count": 1 00:18:09.014 } 00:18:09.014 11:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:18:09.014 11:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:18:09.014 11:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:18:09.014 | .driver_specific 00:18:09.014 | .nvme_error 00:18:09.014 | .status_code 00:18:09.014 | .command_transient_transport_error' 00:18:09.014 11:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:18:09.273 11:01:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 139 > 0 )) 00:18:09.273 11:01:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80119 00:18:09.273 11:01:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80119 ']' 00:18:09.273 11:01:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80119 00:18:09.273 11:01:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:18:09.273 11:01:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:09.273 11:01:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80119 00:18:09.273 killing process with pid 80119 00:18:09.273 Received shutdown signal, test time was about 2.000000 seconds 00:18:09.273 00:18:09.273 Latency(us) 00:18:09.273 [2024-11-15T11:01:56.134Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:09.273 [2024-11-15T11:01:56.134Z] =================================================================================================================== 00:18:09.273 [2024-11-15T11:01:56.134Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:09.273 11:01:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:09.273 11:01:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:09.273 11:01:56 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80119' 00:18:09.273 11:01:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80119 00:18:09.273 11:01:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80119 00:18:09.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:09.532 11:01:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:18:09.532 11:01:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:18:09.532 11:01:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:18:09.532 11:01:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:18:09.532 11:01:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:18:09.532 11:01:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80173 00:18:09.532 11:01:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:18:09.532 11:01:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80173 /var/tmp/bperf.sock 00:18:09.532 11:01:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80173 ']' 00:18:09.532 11:01:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:09.532 11:01:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:09.532 11:01:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:09.532 11:01:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:09.532 11:01:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:09.533 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:09.533 Zero copy mechanism will not be used. 00:18:09.533 [2024-11-15 11:01:56.297884] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
00:18:09.533 [2024-11-15 11:01:56.298004] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80173 ] 00:18:09.792 [2024-11-15 11:01:56.442400] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:09.792 [2024-11-15 11:01:56.493539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:09.792 [2024-11-15 11:01:56.550461] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:09.792 11:01:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:09.792 11:01:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:18:09.792 11:01:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:09.792 11:01:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:10.051 11:01:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:18:10.051 11:01:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.051 11:01:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:10.309 11:01:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.309 11:01:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:10.309 11:01:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:10.568 nvme0n1 00:18:10.568 11:01:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:18:10.568 11:01:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.568 11:01:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:10.568 11:01:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.568 11:01:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:18:10.568 11:01:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:10.568 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:10.568 Zero copy mechanism will not be used. 00:18:10.568 Running I/O for 2 seconds... 
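The trace above is host/digest.sh arming the error-injection pass for the 128 KiB, qd=16 randwrite case: bdevperf (pid 80173) listens on /var/tmp/bperf.sock, NVMe error counters and unlimited bdev retries are enabled, crc32c corruption is kept disabled while the controller is attached with data digest enabled (--ddgst), corruption of every 32nd crc32c calculation is then armed, and the 2-second run starts. The WRITEs logged below therefore fail digest validation and complete as COMMAND TRANSIENT TRANSPORT ERROR. A condensed sketch of that RPC sequence, reconstructed from the commands visible in the trace (paths shortened; an illustration of the flow, not the verbatim host/digest.sh):

# bdevperf side (bperf_rpc in the trace): collect NVMe error counts and retry I/O indefinitely.
scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Via the default RPC socket (plain rpc_cmd in the trace): keep crc32c corruption off while connecting...
scripts/rpc.py accel_error_inject_error -o crc32c -t disable

# ...attach the controller with data digest enabled...
scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
    -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# ...then corrupt every 32nd crc32c calculation so a fraction of the writes fail digest validation.
scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32

# Run the 2-second workload and read back how many commands completed with a transient
# transport error (the check at host/digest.sh@71, e.g. the '(( 139 > 0 ))' seen earlier
# for the previous pass).
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'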
00:18:10.568 [2024-11-15 11:01:57.334617] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:10.568 [2024-11-15 11:01:57.334744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.568 [2024-11-15 11:01:57.334778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:10.568 [2024-11-15 11:01:57.340930] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:10.568 [2024-11-15 11:01:57.341116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.568 [2024-11-15 11:01:57.341141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:10.568 [2024-11-15 11:01:57.346639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:10.568 [2024-11-15 11:01:57.346872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.568 [2024-11-15 11:01:57.346900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:10.568 [2024-11-15 11:01:57.352509] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:10.568 [2024-11-15 11:01:57.352715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.568 [2024-11-15 11:01:57.352753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:10.568 [2024-11-15 11:01:57.358237] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:10.568 [2024-11-15 11:01:57.358369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.568 [2024-11-15 11:01:57.358390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:10.568 [2024-11-15 11:01:57.363701] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:10.568 [2024-11-15 11:01:57.363879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.568 [2024-11-15 11:01:57.363904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:10.568 [2024-11-15 11:01:57.369385] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:10.568 [2024-11-15 11:01:57.369577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.568 [2024-11-15 11:01:57.369602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:18:10.568 [2024-11-15 11:01:57.375193] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:10.568 [2024-11-15 11:01:57.375380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.568 [2024-11-15 11:01:57.375403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:10.568 [2024-11-15 11:01:57.381088] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:10.568 [2024-11-15 11:01:57.381240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.568 [2024-11-15 11:01:57.381263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:10.568 [2024-11-15 11:01:57.386935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:10.568 [2024-11-15 11:01:57.387135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.568 [2024-11-15 11:01:57.387156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:10.568 [2024-11-15 11:01:57.392090] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:10.568 [2024-11-15 11:01:57.392365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.568 [2024-11-15 11:01:57.392418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:10.569 [2024-11-15 11:01:57.397669] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:10.569 [2024-11-15 11:01:57.397751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.569 [2024-11-15 11:01:57.397789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:10.569 [2024-11-15 11:01:57.403636] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:10.569 [2024-11-15 11:01:57.403715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.569 [2024-11-15 11:01:57.403752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:10.569 [2024-11-15 11:01:57.409230] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:10.569 [2024-11-15 11:01:57.409310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.569 [2024-11-15 11:01:57.409332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:10.569 [2024-11-15 11:01:57.415097] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:10.569 [2024-11-15 11:01:57.415205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.569 [2024-11-15 11:01:57.415225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:10.569 [2024-11-15 11:01:57.421226] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:10.569 [2024-11-15 11:01:57.421304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.569 [2024-11-15 11:01:57.421325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:10.569 [2024-11-15 11:01:57.427291] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:10.569 [2024-11-15 11:01:57.427371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.569 [2024-11-15 11:01:57.427394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:10.842 [2024-11-15 11:01:57.433311] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:10.842 [2024-11-15 11:01:57.433391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.842 [2024-11-15 11:01:57.433430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:10.842 [2024-11-15 11:01:57.439138] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:10.842 [2024-11-15 11:01:57.439204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.842 [2024-11-15 11:01:57.439226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:10.842 [2024-11-15 11:01:57.445319] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:10.842 [2024-11-15 11:01:57.445392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.842 [2024-11-15 11:01:57.445429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:10.842 [2024-11-15 11:01:57.450821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:10.842 [2024-11-15 11:01:57.450910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.842 [2024-11-15 11:01:57.450931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:10.842 [2024-11-15 11:01:57.456698] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:10.842 [2024-11-15 11:01:57.456778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.842 [2024-11-15 11:01:57.456798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:10.842 [2024-11-15 11:01:57.462546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:10.842 [2024-11-15 11:01:57.462639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.842 [2024-11-15 11:01:57.462674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:10.842 [2024-11-15 11:01:57.468243] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:10.842 [2024-11-15 11:01:57.468320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.842 [2024-11-15 11:01:57.468341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:10.842 [2024-11-15 11:01:57.473983] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:10.842 [2024-11-15 11:01:57.474062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.842 [2024-11-15 11:01:57.474082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:10.842 [2024-11-15 11:01:57.479672] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:10.842 [2024-11-15 11:01:57.479752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.842 [2024-11-15 11:01:57.479772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:10.842 [2024-11-15 11:01:57.485296] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:10.842 [2024-11-15 11:01:57.485394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.842 [2024-11-15 11:01:57.485431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:10.842 [2024-11-15 11:01:57.491065] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:10.842 [2024-11-15 11:01:57.491138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.842 [2024-11-15 11:01:57.491160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:10.842 [2024-11-15 11:01:57.496860] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:10.842 [2024-11-15 11:01:57.496937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.842 [2024-11-15 11:01:57.496971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:10.842 [2024-11-15 11:01:57.502478] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:10.842 [2024-11-15 11:01:57.502589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.842 [2024-11-15 11:01:57.502614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:10.842 [2024-11-15 11:01:57.508251] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:10.842 [2024-11-15 11:01:57.508350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.842 [2024-11-15 11:01:57.508371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:10.842 [2024-11-15 11:01:57.513947] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:10.842 [2024-11-15 11:01:57.514035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.842 [2024-11-15 11:01:57.514057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:10.842 [2024-11-15 11:01:57.519879] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:10.842 [2024-11-15 11:01:57.519961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.842 [2024-11-15 11:01:57.519985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:10.842 [2024-11-15 11:01:57.525426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:10.842 [2024-11-15 11:01:57.525501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.842 [2024-11-15 11:01:57.525523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:10.842 [2024-11-15 11:01:57.531088] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:10.842 [2024-11-15 11:01:57.531170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.842 [2024-11-15 11:01:57.531191] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:10.842 [2024-11-15 11:01:57.536762] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:10.842 [2024-11-15 11:01:57.536847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.843 [2024-11-15 11:01:57.536868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:10.843 [2024-11-15 11:01:57.542518] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:10.843 [2024-11-15 11:01:57.542607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.843 [2024-11-15 11:01:57.542641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:10.843 [2024-11-15 11:01:57.548244] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:10.843 [2024-11-15 11:01:57.548317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.843 [2024-11-15 11:01:57.548337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:10.843 [2024-11-15 11:01:57.553904] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:10.843 [2024-11-15 11:01:57.554023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.843 [2024-11-15 11:01:57.554044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:10.843 [2024-11-15 11:01:57.559624] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:10.843 [2024-11-15 11:01:57.559739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.843 [2024-11-15 11:01:57.559762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:10.843 [2024-11-15 11:01:57.565479] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:10.843 [2024-11-15 11:01:57.565600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.843 [2024-11-15 11:01:57.565624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:10.843 [2024-11-15 11:01:57.571294] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:10.843 [2024-11-15 11:01:57.571368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.843 [2024-11-15 
11:01:57.571389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:10.843 [2024-11-15 11:01:57.577028] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:10.843 [2024-11-15 11:01:57.577102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.843 [2024-11-15 11:01:57.577124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:10.843 [2024-11-15 11:01:57.582676] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:10.843 [2024-11-15 11:01:57.582751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.843 [2024-11-15 11:01:57.582773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:10.843 [2024-11-15 11:01:57.588396] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:10.843 [2024-11-15 11:01:57.588484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.843 [2024-11-15 11:01:57.588507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:10.843 [2024-11-15 11:01:57.594014] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:10.843 [2024-11-15 11:01:57.594100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.843 [2024-11-15 11:01:57.594121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:10.843 [2024-11-15 11:01:57.599709] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:10.843 [2024-11-15 11:01:57.599796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.843 [2024-11-15 11:01:57.599859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:10.843 [2024-11-15 11:01:57.605353] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:10.843 [2024-11-15 11:01:57.605447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.843 [2024-11-15 11:01:57.605469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:10.843 [2024-11-15 11:01:57.611003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:10.843 [2024-11-15 11:01:57.611082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:10.843 [2024-11-15 11:01:57.611103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:10.843 [2024-11-15 11:01:57.616601] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:10.843 [2024-11-15 11:01:57.616684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.843 [2024-11-15 11:01:57.616707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:10.843 [2024-11-15 11:01:57.622362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:10.843 [2024-11-15 11:01:57.622450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.843 [2024-11-15 11:01:57.622472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:10.843 [2024-11-15 11:01:57.627980] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:10.843 [2024-11-15 11:01:57.628049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.843 [2024-11-15 11:01:57.628073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:10.843 [2024-11-15 11:01:57.633707] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:10.843 [2024-11-15 11:01:57.633773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.843 [2024-11-15 11:01:57.633825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:10.843 [2024-11-15 11:01:57.639280] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:10.843 [2024-11-15 11:01:57.639368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.843 [2024-11-15 11:01:57.639389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:10.843 [2024-11-15 11:01:57.645020] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:10.843 [2024-11-15 11:01:57.645093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.843 [2024-11-15 11:01:57.645113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:10.843 [2024-11-15 11:01:57.650657] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:10.843 [2024-11-15 11:01:57.650727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24608 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:18:10.843 [2024-11-15 11:01:57.650751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:10.843 [2024-11-15 11:01:57.656190] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:10.843 [2024-11-15 11:01:57.656279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.843 [2024-11-15 11:01:57.656299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:10.843 [2024-11-15 11:01:57.661910] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:10.843 [2024-11-15 11:01:57.661980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.843 [2024-11-15 11:01:57.662001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:10.843 [2024-11-15 11:01:57.667652] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:10.843 [2024-11-15 11:01:57.667732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.843 [2024-11-15 11:01:57.667770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:10.843 [2024-11-15 11:01:57.673371] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:10.843 [2024-11-15 11:01:57.673452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.843 [2024-11-15 11:01:57.673483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:10.843 [2024-11-15 11:01:57.679274] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:10.843 [2024-11-15 11:01:57.679352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.843 [2024-11-15 11:01:57.679373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:10.843 [2024-11-15 11:01:57.685043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:10.843 [2024-11-15 11:01:57.685112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.843 [2024-11-15 11:01:57.685137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:10.843 [2024-11-15 11:01:57.690962] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:10.844 [2024-11-15 11:01:57.691052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22368 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.844 [2024-11-15 11:01:57.691081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:11.139 [2024-11-15 11:01:57.696540] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.139 [2024-11-15 11:01:57.696772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.139 [2024-11-15 11:01:57.696841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:11.139 [2024-11-15 11:01:57.702035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.139 [2024-11-15 11:01:57.702163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.139 [2024-11-15 11:01:57.702189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:11.139 [2024-11-15 11:01:57.707684] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.139 [2024-11-15 11:01:57.707791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.139 [2024-11-15 11:01:57.707829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:11.139 [2024-11-15 11:01:57.713442] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.139 [2024-11-15 11:01:57.713514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.139 [2024-11-15 11:01:57.713540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:11.139 [2024-11-15 11:01:57.719135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.139 [2024-11-15 11:01:57.719252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.139 [2024-11-15 11:01:57.719274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:11.139 [2024-11-15 11:01:57.724978] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.139 [2024-11-15 11:01:57.725058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.139 [2024-11-15 11:01:57.725080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:11.139 [2024-11-15 11:01:57.730926] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.139 [2024-11-15 11:01:57.731025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:1 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.139 [2024-11-15 11:01:57.731047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:11.139 [2024-11-15 11:01:57.736832] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.139 [2024-11-15 11:01:57.736992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.139 [2024-11-15 11:01:57.737014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:11.139 [2024-11-15 11:01:57.742596] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.139 [2024-11-15 11:01:57.742662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.139 [2024-11-15 11:01:57.742698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:11.139 [2024-11-15 11:01:57.748397] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.139 [2024-11-15 11:01:57.748547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.139 [2024-11-15 11:01:57.748571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:11.139 [2024-11-15 11:01:57.754169] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.139 [2024-11-15 11:01:57.754251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.139 [2024-11-15 11:01:57.754284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:11.139 [2024-11-15 11:01:57.760129] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.139 [2024-11-15 11:01:57.760222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.139 [2024-11-15 11:01:57.760259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:11.139 [2024-11-15 11:01:57.765955] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.139 [2024-11-15 11:01:57.766051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.139 [2024-11-15 11:01:57.766071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:11.139 [2024-11-15 11:01:57.771618] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.139 [2024-11-15 11:01:57.771694] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.139 [2024-11-15 11:01:57.771716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:11.139 [2024-11-15 11:01:57.777140] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.139 [2024-11-15 11:01:57.777216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.139 [2024-11-15 11:01:57.777236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:11.139 [2024-11-15 11:01:57.782866] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.139 [2024-11-15 11:01:57.782943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.139 [2024-11-15 11:01:57.782967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:11.139 [2024-11-15 11:01:57.788595] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.139 [2024-11-15 11:01:57.788700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.139 [2024-11-15 11:01:57.788724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:11.139 [2024-11-15 11:01:57.794080] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.139 [2024-11-15 11:01:57.794166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.139 [2024-11-15 11:01:57.794187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:11.139 [2024-11-15 11:01:57.799543] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.139 [2024-11-15 11:01:57.799622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.139 [2024-11-15 11:01:57.799643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:11.139 [2024-11-15 11:01:57.805076] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.139 [2024-11-15 11:01:57.805153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.139 [2024-11-15 11:01:57.805175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:11.139 [2024-11-15 11:01:57.811010] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.139 [2024-11-15 11:01:57.811085] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.139 [2024-11-15 11:01:57.811107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:11.139 [2024-11-15 11:01:57.817046] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.139 [2024-11-15 11:01:57.817153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.139 [2024-11-15 11:01:57.817176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:11.139 [2024-11-15 11:01:57.822762] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.140 [2024-11-15 11:01:57.822835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.140 [2024-11-15 11:01:57.822859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:11.140 [2024-11-15 11:01:57.828447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.140 [2024-11-15 11:01:57.828570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.140 [2024-11-15 11:01:57.828594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:11.140 [2024-11-15 11:01:57.834150] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.140 [2024-11-15 11:01:57.834253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.140 [2024-11-15 11:01:57.834275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:11.140 [2024-11-15 11:01:57.839881] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.140 [2024-11-15 11:01:57.839964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.140 [2024-11-15 11:01:57.839987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:11.140 [2024-11-15 11:01:57.845520] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.140 [2024-11-15 11:01:57.845598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.140 [2024-11-15 11:01:57.845621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:11.140 [2024-11-15 11:01:57.851045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.140 [2024-11-15 
11:01:57.851126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.140 [2024-11-15 11:01:57.851158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:11.140 [2024-11-15 11:01:57.856573] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.140 [2024-11-15 11:01:57.856655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.140 [2024-11-15 11:01:57.856678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:11.140 [2024-11-15 11:01:57.862144] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.140 [2024-11-15 11:01:57.862212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.140 [2024-11-15 11:01:57.862236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:11.140 [2024-11-15 11:01:57.867781] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.140 [2024-11-15 11:01:57.867888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.140 [2024-11-15 11:01:57.867912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:11.140 [2024-11-15 11:01:57.873494] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.140 [2024-11-15 11:01:57.873631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.140 [2024-11-15 11:01:57.873655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:11.140 [2024-11-15 11:01:57.879226] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.140 [2024-11-15 11:01:57.879311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.140 [2024-11-15 11:01:57.879332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:11.140 [2024-11-15 11:01:57.885077] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.140 [2024-11-15 11:01:57.885156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.140 [2024-11-15 11:01:57.885176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:11.140 [2024-11-15 11:01:57.891034] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 
00:18:11.140 [2024-11-15 11:01:57.891122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.140 [2024-11-15 11:01:57.891145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:11.140 [2024-11-15 11:01:57.896773] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.140 [2024-11-15 11:01:57.896860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.140 [2024-11-15 11:01:57.896882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:11.140 [2024-11-15 11:01:57.902475] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.140 [2024-11-15 11:01:57.902578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.140 [2024-11-15 11:01:57.902601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:11.140 [2024-11-15 11:01:57.908134] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.140 [2024-11-15 11:01:57.908287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.140 [2024-11-15 11:01:57.908311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:11.140 [2024-11-15 11:01:57.913872] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.140 [2024-11-15 11:01:57.913975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.140 [2024-11-15 11:01:57.913998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:11.140 [2024-11-15 11:01:57.919969] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.140 [2024-11-15 11:01:57.920045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.140 [2024-11-15 11:01:57.920070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:11.140 [2024-11-15 11:01:57.925605] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.140 [2024-11-15 11:01:57.925699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.140 [2024-11-15 11:01:57.925722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:11.140 [2024-11-15 11:01:57.931174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) 
with pdu=0x2000166ff3c8 00:18:11.140 [2024-11-15 11:01:57.931252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.140 [2024-11-15 11:01:57.931275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:11.140 [2024-11-15 11:01:57.937053] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.140 [2024-11-15 11:01:57.937167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.140 [2024-11-15 11:01:57.937189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:11.140 [2024-11-15 11:01:57.942958] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.140 [2024-11-15 11:01:57.943034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.140 [2024-11-15 11:01:57.943056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:11.140 [2024-11-15 11:01:57.948653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.140 [2024-11-15 11:01:57.948787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.140 [2024-11-15 11:01:57.948809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:11.140 [2024-11-15 11:01:57.954350] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.140 [2024-11-15 11:01:57.954451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.140 [2024-11-15 11:01:57.954474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:11.140 [2024-11-15 11:01:57.960152] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.140 [2024-11-15 11:01:57.960245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.140 [2024-11-15 11:01:57.960279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:11.140 [2024-11-15 11:01:57.965871] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.140 [2024-11-15 11:01:57.965953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.140 [2024-11-15 11:01:57.965976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:11.140 [2024-11-15 11:01:57.971533] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.140 [2024-11-15 11:01:57.971637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.140 [2024-11-15 11:01:57.971660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:11.140 [2024-11-15 11:01:57.977238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.140 [2024-11-15 11:01:57.977369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.141 [2024-11-15 11:01:57.977390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:11.141 [2024-11-15 11:01:57.983120] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.141 [2024-11-15 11:01:57.983220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.141 [2024-11-15 11:01:57.983240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:11.141 [2024-11-15 11:01:57.989048] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.141 [2024-11-15 11:01:57.989137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.141 [2024-11-15 11:01:57.989158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:11.141 [2024-11-15 11:01:57.995002] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.141 [2024-11-15 11:01:57.995141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.141 [2024-11-15 11:01:57.995161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:11.401 [2024-11-15 11:01:58.000683] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.401 [2024-11-15 11:01:58.000867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.401 [2024-11-15 11:01:58.000887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:11.401 [2024-11-15 11:01:58.006345] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.401 [2024-11-15 11:01:58.006441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.401 [2024-11-15 11:01:58.006463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:11.401 [2024-11-15 11:01:58.012207] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.401 [2024-11-15 11:01:58.012323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.401 [2024-11-15 11:01:58.012358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:11.401 [2024-11-15 11:01:58.018012] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.401 [2024-11-15 11:01:58.018101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.401 [2024-11-15 11:01:58.018122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:11.401 [2024-11-15 11:01:58.023708] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.401 [2024-11-15 11:01:58.023863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.401 [2024-11-15 11:01:58.023887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:11.401 [2024-11-15 11:01:58.029707] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.401 [2024-11-15 11:01:58.029892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.401 [2024-11-15 11:01:58.029913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:11.401 [2024-11-15 11:01:58.035455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.401 [2024-11-15 11:01:58.035539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.401 [2024-11-15 11:01:58.035561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:11.401 [2024-11-15 11:01:58.041116] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.401 [2024-11-15 11:01:58.041274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.401 [2024-11-15 11:01:58.041295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:11.401 [2024-11-15 11:01:58.046760] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.401 [2024-11-15 11:01:58.046858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.401 [2024-11-15 11:01:58.046879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:11.401 [2024-11-15 11:01:58.052393] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.401 [2024-11-15 11:01:58.052501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.401 [2024-11-15 11:01:58.052523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:11.401 [2024-11-15 11:01:58.058027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.401 [2024-11-15 11:01:58.058119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.401 [2024-11-15 11:01:58.058139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:11.401 [2024-11-15 11:01:58.063652] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.402 [2024-11-15 11:01:58.063736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.402 [2024-11-15 11:01:58.063758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:11.402 [2024-11-15 11:01:58.069398] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.402 [2024-11-15 11:01:58.069498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.402 [2024-11-15 11:01:58.069519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:11.402 [2024-11-15 11:01:58.075204] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.402 [2024-11-15 11:01:58.075301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.402 [2024-11-15 11:01:58.075321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:11.402 [2024-11-15 11:01:58.081012] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.402 [2024-11-15 11:01:58.081119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.402 [2024-11-15 11:01:58.081140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:11.402 [2024-11-15 11:01:58.086622] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.402 [2024-11-15 11:01:58.086749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.402 [2024-11-15 11:01:58.086813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:11.402 
[2024-11-15 11:01:58.092121] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.402 [2024-11-15 11:01:58.092199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.402 [2024-11-15 11:01:58.092220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:11.402 [2024-11-15 11:01:58.097876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.402 [2024-11-15 11:01:58.097981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.402 [2024-11-15 11:01:58.098001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:11.402 [2024-11-15 11:01:58.103928] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.402 [2024-11-15 11:01:58.104002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.402 [2024-11-15 11:01:58.104025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:11.402 [2024-11-15 11:01:58.109756] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.402 [2024-11-15 11:01:58.109900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.402 [2024-11-15 11:01:58.109920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:11.402 [2024-11-15 11:01:58.115644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.402 [2024-11-15 11:01:58.115754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.402 [2024-11-15 11:01:58.115777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:11.402 [2024-11-15 11:01:58.121612] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.402 [2024-11-15 11:01:58.121693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.402 [2024-11-15 11:01:58.121717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:11.402 [2024-11-15 11:01:58.127489] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.402 [2024-11-15 11:01:58.127632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.402 [2024-11-15 11:01:58.127656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 
m:0 dnr:0 00:18:11.402 [2024-11-15 11:01:58.133317] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.402 [2024-11-15 11:01:58.133468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.402 [2024-11-15 11:01:58.133490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:11.402 [2024-11-15 11:01:58.139189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.402 [2024-11-15 11:01:58.139275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.402 [2024-11-15 11:01:58.139307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:11.402 [2024-11-15 11:01:58.144853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.402 [2024-11-15 11:01:58.144934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.402 [2024-11-15 11:01:58.144955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:11.402 [2024-11-15 11:01:58.150637] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.402 [2024-11-15 11:01:58.150720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.402 [2024-11-15 11:01:58.150770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:11.402 [2024-11-15 11:01:58.156381] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.402 [2024-11-15 11:01:58.156494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.402 [2024-11-15 11:01:58.156516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:11.402 [2024-11-15 11:01:58.162121] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.402 [2024-11-15 11:01:58.162218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.402 [2024-11-15 11:01:58.162242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:11.402 [2024-11-15 11:01:58.167751] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.402 [2024-11-15 11:01:58.167898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.402 [2024-11-15 11:01:58.167922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 
cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:11.402 [2024-11-15 11:01:58.173418] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.402 [2024-11-15 11:01:58.173513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.402 [2024-11-15 11:01:58.173536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:11.402 [2024-11-15 11:01:58.178953] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.402 [2024-11-15 11:01:58.179048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.402 [2024-11-15 11:01:58.179068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:11.402 [2024-11-15 11:01:58.184588] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.402 [2024-11-15 11:01:58.184668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.402 [2024-11-15 11:01:58.184705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:11.402 [2024-11-15 11:01:58.190291] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.402 [2024-11-15 11:01:58.190440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.402 [2024-11-15 11:01:58.190462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:11.402 [2024-11-15 11:01:58.196061] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.402 [2024-11-15 11:01:58.196191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.402 [2024-11-15 11:01:58.196213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:11.402 [2024-11-15 11:01:58.201975] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.402 [2024-11-15 11:01:58.202054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.402 [2024-11-15 11:01:58.202074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:11.402 [2024-11-15 11:01:58.207729] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.402 [2024-11-15 11:01:58.207824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.402 [2024-11-15 11:01:58.207847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:11.402 [2024-11-15 11:01:58.213405] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.402 [2024-11-15 11:01:58.213560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.402 [2024-11-15 11:01:58.213584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:11.402 [2024-11-15 11:01:58.219001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.403 [2024-11-15 11:01:58.219135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.403 [2024-11-15 11:01:58.219157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:11.403 [2024-11-15 11:01:58.224632] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.403 [2024-11-15 11:01:58.224731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.403 [2024-11-15 11:01:58.224755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:11.403 [2024-11-15 11:01:58.230448] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.403 [2024-11-15 11:01:58.230634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.403 [2024-11-15 11:01:58.230658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:11.403 [2024-11-15 11:01:58.236044] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.403 [2024-11-15 11:01:58.236224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.403 [2024-11-15 11:01:58.236245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:11.403 [2024-11-15 11:01:58.242006] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.403 [2024-11-15 11:01:58.242192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.403 [2024-11-15 11:01:58.242212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:11.403 [2024-11-15 11:01:58.247961] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.403 [2024-11-15 11:01:58.248211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.403 [2024-11-15 11:01:58.248232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:11.403 [2024-11-15 11:01:58.253834] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.403 [2024-11-15 11:01:58.254018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.403 [2024-11-15 11:01:58.254039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:11.403 [2024-11-15 11:01:58.259919] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.403 [2024-11-15 11:01:58.260069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.403 [2024-11-15 11:01:58.260092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:11.663 [2024-11-15 11:01:58.265473] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.663 [2024-11-15 11:01:58.265688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.663 [2024-11-15 11:01:58.265712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:11.663 [2024-11-15 11:01:58.271335] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.663 [2024-11-15 11:01:58.271541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.663 [2024-11-15 11:01:58.271579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:11.663 [2024-11-15 11:01:58.277099] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.663 [2024-11-15 11:01:58.277276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.663 [2024-11-15 11:01:58.277296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:11.663 [2024-11-15 11:01:58.282883] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.663 [2024-11-15 11:01:58.283084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.663 [2024-11-15 11:01:58.283104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:11.663 [2024-11-15 11:01:58.288735] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.663 [2024-11-15 11:01:58.288947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.663 [2024-11-15 11:01:58.288967] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:11.663 [2024-11-15 11:01:58.294518] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.663 [2024-11-15 11:01:58.294724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.663 [2024-11-15 11:01:58.294747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:11.663 [2024-11-15 11:01:58.300530] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.663 [2024-11-15 11:01:58.300771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.663 [2024-11-15 11:01:58.300829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:11.663 [2024-11-15 11:01:58.306194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.663 [2024-11-15 11:01:58.306373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.663 [2024-11-15 11:01:58.306394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:11.663 [2024-11-15 11:01:58.312125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.663 [2024-11-15 11:01:58.312362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.663 [2024-11-15 11:01:58.312382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:11.663 [2024-11-15 11:01:58.317922] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.663 [2024-11-15 11:01:58.318095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.663 [2024-11-15 11:01:58.318116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:11.663 [2024-11-15 11:01:58.323535] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.663 [2024-11-15 11:01:58.323700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.663 [2024-11-15 11:01:58.323722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:11.663 5347.00 IOPS, 668.38 MiB/s [2024-11-15T11:01:58.524Z] [2024-11-15 11:01:58.330383] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.663 [2024-11-15 11:01:58.330549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.663 
[2024-11-15 11:01:58.330571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:11.663 [2024-11-15 11:01:58.336397] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.663 [2024-11-15 11:01:58.336626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.663 [2024-11-15 11:01:58.336648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:11.663 [2024-11-15 11:01:58.342267] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.663 [2024-11-15 11:01:58.342505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.663 [2024-11-15 11:01:58.342527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:11.663 [2024-11-15 11:01:58.348051] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.663 [2024-11-15 11:01:58.348293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.663 [2024-11-15 11:01:58.348312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:11.663 [2024-11-15 11:01:58.353707] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.663 [2024-11-15 11:01:58.353924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.663 [2024-11-15 11:01:58.353960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:11.663 [2024-11-15 11:01:58.359334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.663 [2024-11-15 11:01:58.359548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.663 [2024-11-15 11:01:58.359570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:11.663 [2024-11-15 11:01:58.365131] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.663 [2024-11-15 11:01:58.365321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.663 [2024-11-15 11:01:58.365342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:11.664 [2024-11-15 11:01:58.370916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.664 [2024-11-15 11:01:58.371075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:18:11.664 [2024-11-15 11:01:58.371094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:11.664 [2024-11-15 11:01:58.376750] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.664 [2024-11-15 11:01:58.377014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.664 [2024-11-15 11:01:58.377041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:11.664 [2024-11-15 11:01:58.382704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.664 [2024-11-15 11:01:58.382946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.664 [2024-11-15 11:01:58.382967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:11.664 [2024-11-15 11:01:58.388653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.664 [2024-11-15 11:01:58.388928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.664 [2024-11-15 11:01:58.388948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:11.664 [2024-11-15 11:01:58.394359] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.664 [2024-11-15 11:01:58.394561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.664 [2024-11-15 11:01:58.394585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:11.664 [2024-11-15 11:01:58.400057] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.664 [2024-11-15 11:01:58.400293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.664 [2024-11-15 11:01:58.400314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:11.664 [2024-11-15 11:01:58.406035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.664 [2024-11-15 11:01:58.406247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.664 [2024-11-15 11:01:58.406268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:11.664 [2024-11-15 11:01:58.412227] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.664 [2024-11-15 11:01:58.412458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24224 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:18:11.664 [2024-11-15 11:01:58.412495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:11.664 [2024-11-15 11:01:58.418269] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.664 [2024-11-15 11:01:58.418514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.664 [2024-11-15 11:01:58.418553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:11.664 [2024-11-15 11:01:58.424292] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.664 [2024-11-15 11:01:58.424483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.664 [2024-11-15 11:01:58.424505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:11.664 [2024-11-15 11:01:58.430103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.664 [2024-11-15 11:01:58.430372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.664 [2024-11-15 11:01:58.430402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:11.664 [2024-11-15 11:01:58.436055] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.664 [2024-11-15 11:01:58.436304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.664 [2024-11-15 11:01:58.436325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:11.664 [2024-11-15 11:01:58.441878] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.664 [2024-11-15 11:01:58.442056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.664 [2024-11-15 11:01:58.442076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:11.664 [2024-11-15 11:01:58.447483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.664 [2024-11-15 11:01:58.447707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.664 [2024-11-15 11:01:58.447729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:11.664 [2024-11-15 11:01:58.453312] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.664 [2024-11-15 11:01:58.453503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3936 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.664 [2024-11-15 11:01:58.453525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:11.664 [2024-11-15 11:01:58.459004] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.664 [2024-11-15 11:01:58.459139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.664 [2024-11-15 11:01:58.459160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:11.664 [2024-11-15 11:01:58.464804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.664 [2024-11-15 11:01:58.465019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.664 [2024-11-15 11:01:58.465039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:11.664 [2024-11-15 11:01:58.470507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.664 [2024-11-15 11:01:58.470728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.664 [2024-11-15 11:01:58.470771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:11.664 [2024-11-15 11:01:58.476398] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.664 [2024-11-15 11:01:58.476563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.664 [2024-11-15 11:01:58.476587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:11.664 [2024-11-15 11:01:58.482202] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.664 [2024-11-15 11:01:58.482332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.664 [2024-11-15 11:01:58.482352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:11.664 [2024-11-15 11:01:58.487871] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.664 [2024-11-15 11:01:58.488021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.664 [2024-11-15 11:01:58.488045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:11.664 [2024-11-15 11:01:58.493464] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.664 [2024-11-15 11:01:58.493679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:1 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.664 [2024-11-15 11:01:58.493701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:11.664 [2024-11-15 11:01:58.499054] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.664 [2024-11-15 11:01:58.499230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.664 [2024-11-15 11:01:58.499250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:11.664 [2024-11-15 11:01:58.504831] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.664 [2024-11-15 11:01:58.505101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.664 [2024-11-15 11:01:58.505121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:11.664 [2024-11-15 11:01:58.510451] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.664 [2024-11-15 11:01:58.510620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.664 [2024-11-15 11:01:58.510656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:11.664 [2024-11-15 11:01:58.516473] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.664 [2024-11-15 11:01:58.516638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.664 [2024-11-15 11:01:58.516675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:11.924 [2024-11-15 11:01:58.522575] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.924 [2024-11-15 11:01:58.522764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.924 [2024-11-15 11:01:58.522787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:11.924 [2024-11-15 11:01:58.528396] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.924 [2024-11-15 11:01:58.528616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.924 [2024-11-15 11:01:58.528640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:11.924 [2024-11-15 11:01:58.534278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.924 [2024-11-15 11:01:58.534410] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.924 [2024-11-15 11:01:58.534447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:11.924 [2024-11-15 11:01:58.540077] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.924 [2024-11-15 11:01:58.540259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.924 [2024-11-15 11:01:58.540293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:11.924 [2024-11-15 11:01:58.545832] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.925 [2024-11-15 11:01:58.546000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.925 [2024-11-15 11:01:58.546022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:11.925 [2024-11-15 11:01:58.551595] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.925 [2024-11-15 11:01:58.551781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.925 [2024-11-15 11:01:58.551801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:11.925 [2024-11-15 11:01:58.557234] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.925 [2024-11-15 11:01:58.557366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.925 [2024-11-15 11:01:58.557387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:11.925 [2024-11-15 11:01:58.562825] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.925 [2024-11-15 11:01:58.562972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.925 [2024-11-15 11:01:58.562992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:11.925 [2024-11-15 11:01:58.568479] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.925 [2024-11-15 11:01:58.568668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.925 [2024-11-15 11:01:58.568689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:11.925 [2024-11-15 11:01:58.573938] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.925 [2024-11-15 11:01:58.574113] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.925 [2024-11-15 11:01:58.574133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:11.925 [2024-11-15 11:01:58.579587] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.925 [2024-11-15 11:01:58.579848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.925 [2024-11-15 11:01:58.579872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:11.925 [2024-11-15 11:01:58.585367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.925 [2024-11-15 11:01:58.585588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.925 [2024-11-15 11:01:58.585611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:11.925 [2024-11-15 11:01:58.590969] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.925 [2024-11-15 11:01:58.591113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.925 [2024-11-15 11:01:58.591133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:11.925 [2024-11-15 11:01:58.596937] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.925 [2024-11-15 11:01:58.597111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.925 [2024-11-15 11:01:58.597131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:11.925 [2024-11-15 11:01:58.602991] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.925 [2024-11-15 11:01:58.603250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.925 [2024-11-15 11:01:58.603272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:11.925 [2024-11-15 11:01:58.608980] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.925 [2024-11-15 11:01:58.609217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.925 [2024-11-15 11:01:58.609239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:11.925 [2024-11-15 11:01:58.614973] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.925 [2024-11-15 
11:01:58.615188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.925 [2024-11-15 11:01:58.615210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:11.925 [2024-11-15 11:01:58.620712] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.925 [2024-11-15 11:01:58.620917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.925 [2024-11-15 11:01:58.620953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:11.925 [2024-11-15 11:01:58.626379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.925 [2024-11-15 11:01:58.626552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.925 [2024-11-15 11:01:58.626589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:11.925 [2024-11-15 11:01:58.631974] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.925 [2024-11-15 11:01:58.632253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.925 [2024-11-15 11:01:58.632295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:11.925 [2024-11-15 11:01:58.637474] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.925 [2024-11-15 11:01:58.637659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.925 [2024-11-15 11:01:58.637680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:11.925 [2024-11-15 11:01:58.642908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.925 [2024-11-15 11:01:58.643081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.925 [2024-11-15 11:01:58.643101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:11.925 [2024-11-15 11:01:58.648374] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.925 [2024-11-15 11:01:58.648525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.925 [2024-11-15 11:01:58.648561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:11.925 [2024-11-15 11:01:58.653699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 
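The repeated messages above show the TCP transport's data-digest check (data_crc32_calc_done) failing: the NVMe/TCP data digest (DDGST), a CRC-32C computed over the PDU data, does not match what was received, and each affected WRITE is then completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22). As a point of reference only, the sketch below shows a minimal, self-contained CRC-32C computation of the kind such a digest check performs; it is a plain bitwise software implementation, none of its names are SPDK APIs, and the buffer and "received" digest are made up for illustration.

/*
 * Illustrative sketch only: the NVMe/TCP data digest (DDGST) is a CRC-32C
 * over the DATA field of a data PDU. SPDK ships its own optimized CRC-32C
 * helpers; this standalone version exists just to show the arithmetic.
 */
#include <stdint.h>
#include <stdio.h>
#include <stddef.h>
#include <string.h>

static uint32_t
crc32c(const void *buf, size_t len)
{
	const uint8_t *p = buf;
	uint32_t crc = 0xFFFFFFFFu;

	while (len--) {
		crc ^= *p++;
		for (int i = 0; i < 8; i++) {
			/* 0x82F63B38 is the reflected CRC-32C (Castagnoli) polynomial */
			crc = (crc >> 1) ^ (0x82F63B38u & (0u - (crc & 1u)));
		}
	}
	return crc ^ 0xFFFFFFFFu;
}

int
main(void)
{
	uint8_t data[32];
	memset(data, 0xA5, sizeof(data));     /* stand-in for the PDU DATA field */

	uint32_t ddgst_received = 0xDEADBEEFu; /* hypothetical digest carried in the PDU */
	uint32_t ddgst_computed = crc32c(data, sizeof(data));

	if (ddgst_computed != ddgst_received) {
		/* A mismatch like this is what the log above reports as "Data digest error". */
		printf("Data digest error: computed 0x%08x, received 0x%08x\n",
		       ddgst_computed, ddgst_received);
	}
	return 0;
}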
00:18:11.925 [2024-11-15 11:01:58.653863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.925 [2024-11-15 11:01:58.653898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:11.925 [2024-11-15 11:01:58.659048] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.925 [2024-11-15 11:01:58.659194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.925 [2024-11-15 11:01:58.659213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:11.925 [2024-11-15 11:01:58.664465] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.925 [2024-11-15 11:01:58.664623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.925 [2024-11-15 11:01:58.664656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:11.925 [2024-11-15 11:01:58.669861] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.925 [2024-11-15 11:01:58.670016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.925 [2024-11-15 11:01:58.670036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:11.925 [2024-11-15 11:01:58.675324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.925 [2024-11-15 11:01:58.675530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.925 [2024-11-15 11:01:58.675552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:11.925 [2024-11-15 11:01:58.681297] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.925 [2024-11-15 11:01:58.681515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.925 [2024-11-15 11:01:58.681536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:11.925 [2024-11-15 11:01:58.687003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.925 [2024-11-15 11:01:58.687192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.925 [2024-11-15 11:01:58.687215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:11.925 [2024-11-15 11:01:58.692839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) 
with pdu=0x2000166ff3c8 00:18:11.925 [2024-11-15 11:01:58.693001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.925 [2024-11-15 11:01:58.693021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:11.925 [2024-11-15 11:01:58.698296] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.926 [2024-11-15 11:01:58.698446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.926 [2024-11-15 11:01:58.698466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:11.926 [2024-11-15 11:01:58.703724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.926 [2024-11-15 11:01:58.703931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.926 [2024-11-15 11:01:58.703952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:11.926 [2024-11-15 11:01:58.709135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.926 [2024-11-15 11:01:58.709287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.926 [2024-11-15 11:01:58.709307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:11.926 [2024-11-15 11:01:58.714487] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.926 [2024-11-15 11:01:58.714653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.926 [2024-11-15 11:01:58.714674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:11.926 [2024-11-15 11:01:58.720003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.926 [2024-11-15 11:01:58.720206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.926 [2024-11-15 11:01:58.720228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:11.926 [2024-11-15 11:01:58.725350] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.926 [2024-11-15 11:01:58.725500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.926 [2024-11-15 11:01:58.725520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:11.926 [2024-11-15 11:01:58.730800] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.926 [2024-11-15 11:01:58.730948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.926 [2024-11-15 11:01:58.730969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:11.926 [2024-11-15 11:01:58.736258] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.926 [2024-11-15 11:01:58.736464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.926 [2024-11-15 11:01:58.736485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:11.926 [2024-11-15 11:01:58.741680] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.926 [2024-11-15 11:01:58.741832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.926 [2024-11-15 11:01:58.741853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:11.926 [2024-11-15 11:01:58.747171] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.926 [2024-11-15 11:01:58.747320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.926 [2024-11-15 11:01:58.747340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:11.926 [2024-11-15 11:01:58.752695] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.926 [2024-11-15 11:01:58.752957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.926 [2024-11-15 11:01:58.752978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:11.926 [2024-11-15 11:01:58.758601] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.926 [2024-11-15 11:01:58.758837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.926 [2024-11-15 11:01:58.758858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:11.926 [2024-11-15 11:01:58.764178] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.926 [2024-11-15 11:01:58.764454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.926 [2024-11-15 11:01:58.764482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:11.926 [2024-11-15 11:01:58.769884] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.926 [2024-11-15 11:01:58.770074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.926 [2024-11-15 11:01:58.770094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:11.926 [2024-11-15 11:01:58.775714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.926 [2024-11-15 11:01:58.776000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.926 [2024-11-15 11:01:58.776022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:11.926 [2024-11-15 11:01:58.781492] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:11.926 [2024-11-15 11:01:58.781710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.926 [2024-11-15 11:01:58.781746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:12.186 [2024-11-15 11:01:58.787231] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:12.186 [2024-11-15 11:01:58.787403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.186 [2024-11-15 11:01:58.787440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:12.186 [2024-11-15 11:01:58.793031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:12.186 [2024-11-15 11:01:58.793238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.186 [2024-11-15 11:01:58.793261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:12.186 [2024-11-15 11:01:58.798785] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:12.186 [2024-11-15 11:01:58.799027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.186 [2024-11-15 11:01:58.799050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:12.186 [2024-11-15 11:01:58.804353] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:12.186 [2024-11-15 11:01:58.804602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.186 [2024-11-15 11:01:58.804625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:12.186 [2024-11-15 11:01:58.810104] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:12.186 [2024-11-15 11:01:58.810280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.186 [2024-11-15 11:01:58.810300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:12.186 [2024-11-15 11:01:58.815889] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:12.186 [2024-11-15 11:01:58.816073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.186 [2024-11-15 11:01:58.816106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:12.186 [2024-11-15 11:01:58.821590] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:12.186 [2024-11-15 11:01:58.821789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.186 [2024-11-15 11:01:58.821812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:12.186 [2024-11-15 11:01:58.827228] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:12.186 [2024-11-15 11:01:58.827394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.186 [2024-11-15 11:01:58.827432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:12.186 [2024-11-15 11:01:58.833056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:12.186 [2024-11-15 11:01:58.833256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.186 [2024-11-15 11:01:58.833277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:12.186 [2024-11-15 11:01:58.838649] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:12.186 [2024-11-15 11:01:58.838887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.186 [2024-11-15 11:01:58.838907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:12.186 [2024-11-15 11:01:58.844263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:12.186 [2024-11-15 11:01:58.844475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.186 [2024-11-15 11:01:58.844496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:12.186 [2024-11-15 
11:01:58.849854] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:12.186 [2024-11-15 11:01:58.850005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.186 [2024-11-15 11:01:58.850025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:12.186 [2024-11-15 11:01:58.855230] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:12.186 [2024-11-15 11:01:58.855384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.186 [2024-11-15 11:01:58.855405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:12.186 [2024-11-15 11:01:58.860682] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:12.186 [2024-11-15 11:01:58.860902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.186 [2024-11-15 11:01:58.860924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:12.186 [2024-11-15 11:01:58.866165] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:12.186 [2024-11-15 11:01:58.866304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.186 [2024-11-15 11:01:58.866325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:12.186 [2024-11-15 11:01:58.871779] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:12.186 [2024-11-15 11:01:58.871971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.186 [2024-11-15 11:01:58.871993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:12.186 [2024-11-15 11:01:58.877173] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:12.186 [2024-11-15 11:01:58.877319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.186 [2024-11-15 11:01:58.877339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:12.186 [2024-11-15 11:01:58.882428] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:12.186 [2024-11-15 11:01:58.882616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.186 [2024-11-15 11:01:58.882635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 
00:18:12.186 [2024-11-15 11:01:58.887851] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:12.186 [2024-11-15 11:01:58.888007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.186 [2024-11-15 11:01:58.888029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:12.186 [2024-11-15 11:01:58.893184] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:12.186 [2024-11-15 11:01:58.893318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.186 [2024-11-15 11:01:58.893337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:12.186 [2024-11-15 11:01:58.898467] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:12.186 [2024-11-15 11:01:58.898628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.186 [2024-11-15 11:01:58.898648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:12.186 [2024-11-15 11:01:58.903755] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:12.187 [2024-11-15 11:01:58.903949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.187 [2024-11-15 11:01:58.903970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:12.187 [2024-11-15 11:01:58.909062] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:12.187 [2024-11-15 11:01:58.909235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.187 [2024-11-15 11:01:58.909256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:12.187 [2024-11-15 11:01:58.914313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:12.187 [2024-11-15 11:01:58.914513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.187 [2024-11-15 11:01:58.914533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:12.187 [2024-11-15 11:01:58.919756] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:12.187 [2024-11-15 11:01:58.919920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.187 [2024-11-15 11:01:58.919941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:18:12.187 [2024-11-15 11:01:58.925162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:12.187 [2024-11-15 11:01:58.925316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.187 [2024-11-15 11:01:58.925335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:12.187 [2024-11-15 11:01:58.930413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:12.187 [2024-11-15 11:01:58.930601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.187 [2024-11-15 11:01:58.930622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:12.187 [2024-11-15 11:01:58.935871] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:12.187 [2024-11-15 11:01:58.936060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.187 [2024-11-15 11:01:58.936081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:12.187 [2024-11-15 11:01:58.941346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:12.187 [2024-11-15 11:01:58.941494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.187 [2024-11-15 11:01:58.941515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:12.187 [2024-11-15 11:01:58.946754] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:12.187 [2024-11-15 11:01:58.946972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.187 [2024-11-15 11:01:58.946992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:12.187 [2024-11-15 11:01:58.952592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:12.187 [2024-11-15 11:01:58.952820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.187 [2024-11-15 11:01:58.952841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:12.187 [2024-11-15 11:01:58.958501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:12.187 [2024-11-15 11:01:58.958742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.187 [2024-11-15 11:01:58.958772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:12.187 [2024-11-15 11:01:58.964350] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:12.187 [2024-11-15 11:01:58.964494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.187 [2024-11-15 11:01:58.964516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:12.187 [2024-11-15 11:01:58.970166] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:12.187 [2024-11-15 11:01:58.970320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.187 [2024-11-15 11:01:58.970342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:12.187 [2024-11-15 11:01:58.975776] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:12.187 [2024-11-15 11:01:58.975992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.187 [2024-11-15 11:01:58.976014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:12.187 [2024-11-15 11:01:58.981316] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:12.187 [2024-11-15 11:01:58.981505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.187 [2024-11-15 11:01:58.981526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:12.187 [2024-11-15 11:01:58.987007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:12.187 [2024-11-15 11:01:58.987279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.187 [2024-11-15 11:01:58.987301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:12.187 [2024-11-15 11:01:58.992815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:12.187 [2024-11-15 11:01:58.993024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.187 [2024-11-15 11:01:58.993047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:12.187 [2024-11-15 11:01:58.998673] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:12.187 [2024-11-15 11:01:58.998958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.187 [2024-11-15 11:01:58.998980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:12.187 [2024-11-15 11:01:59.004222] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:12.187 [2024-11-15 11:01:59.004397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.187 [2024-11-15 11:01:59.004419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:12.187 [2024-11-15 11:01:59.009777] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:12.187 [2024-11-15 11:01:59.009929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.187 [2024-11-15 11:01:59.009949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:12.187 [2024-11-15 11:01:59.015292] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:12.187 [2024-11-15 11:01:59.015462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.187 [2024-11-15 11:01:59.015483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:12.187 [2024-11-15 11:01:59.020760] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:12.187 [2024-11-15 11:01:59.020907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.187 [2024-11-15 11:01:59.020943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:12.187 [2024-11-15 11:01:59.026018] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:12.187 [2024-11-15 11:01:59.026133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.187 [2024-11-15 11:01:59.026152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:12.187 [2024-11-15 11:01:59.031265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:12.187 [2024-11-15 11:01:59.031394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.187 [2024-11-15 11:01:59.031413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:12.187 [2024-11-15 11:01:59.036636] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:12.187 [2024-11-15 11:01:59.036801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.187 [2024-11-15 11:01:59.036822] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:12.187 [2024-11-15 11:01:59.041932] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:12.187 [2024-11-15 11:01:59.042143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.187 [2024-11-15 11:01:59.042162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:12.447 [2024-11-15 11:01:59.047324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:12.447 [2024-11-15 11:01:59.047492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.447 [2024-11-15 11:01:59.047512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:12.447 [2024-11-15 11:01:59.052679] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:12.447 [2024-11-15 11:01:59.052853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.447 [2024-11-15 11:01:59.052872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:12.447 [2024-11-15 11:01:59.058257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:12.447 [2024-11-15 11:01:59.058459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.447 [2024-11-15 11:01:59.058482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:12.447 [2024-11-15 11:01:59.064436] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:12.447 [2024-11-15 11:01:59.064654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.447 [2024-11-15 11:01:59.064677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:12.447 [2024-11-15 11:01:59.070231] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:12.447 [2024-11-15 11:01:59.070379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.447 [2024-11-15 11:01:59.070401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:12.447 [2024-11-15 11:01:59.076119] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:12.447 [2024-11-15 11:01:59.076363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.447 [2024-11-15 11:01:59.076384] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:12.447 [2024-11-15 11:01:59.081791] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:12.448 [2024-11-15 11:01:59.081987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.448 [2024-11-15 11:01:59.082024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:12.448 [2024-11-15 11:01:59.087472] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:12.448 [2024-11-15 11:01:59.087678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.448 [2024-11-15 11:01:59.087698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:12.448 [2024-11-15 11:01:59.093151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:12.448 [2024-11-15 11:01:59.093287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.448 [2024-11-15 11:01:59.093307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:12.448 [2024-11-15 11:01:59.098677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:12.448 [2024-11-15 11:01:59.098858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.448 [2024-11-15 11:01:59.098877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:12.448 [2024-11-15 11:01:59.104208] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:12.448 [2024-11-15 11:01:59.104401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.448 [2024-11-15 11:01:59.104436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:12.448 [2024-11-15 11:01:59.109843] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:12.448 [2024-11-15 11:01:59.110015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.448 [2024-11-15 11:01:59.110035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:12.448 [2024-11-15 11:01:59.115375] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:12.448 [2024-11-15 11:01:59.115550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.448 [2024-11-15 
11:01:59.115587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:12.448 [2024-11-15 11:01:59.121266] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:12.448 [2024-11-15 11:01:59.121488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.448 [2024-11-15 11:01:59.121510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:12.448 [2024-11-15 11:01:59.127240] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:12.448 [2024-11-15 11:01:59.127423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.448 [2024-11-15 11:01:59.127461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:12.448 [2024-11-15 11:01:59.133194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:12.448 [2024-11-15 11:01:59.133372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.448 [2024-11-15 11:01:59.133392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:12.448 [2024-11-15 11:01:59.139033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:12.448 [2024-11-15 11:01:59.139204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.448 [2024-11-15 11:01:59.139224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:12.448 [2024-11-15 11:01:59.144725] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:12.448 [2024-11-15 11:01:59.144954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.448 [2024-11-15 11:01:59.144973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:12.448 [2024-11-15 11:01:59.150322] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:12.448 [2024-11-15 11:01:59.150533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.448 [2024-11-15 11:01:59.150556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:12.448 [2024-11-15 11:01:59.156001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:12.448 [2024-11-15 11:01:59.156248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:12.448 [2024-11-15 11:01:59.156269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:12.448 [2024-11-15 11:01:59.161571] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:12.448 [2024-11-15 11:01:59.161766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.448 [2024-11-15 11:01:59.161787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:12.448 [2024-11-15 11:01:59.167282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:12.448 [2024-11-15 11:01:59.167467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.448 [2024-11-15 11:01:59.167489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:12.448 [2024-11-15 11:01:59.172881] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:12.448 [2024-11-15 11:01:59.173106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.448 [2024-11-15 11:01:59.173127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:12.448 [2024-11-15 11:01:59.178249] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:12.448 [2024-11-15 11:01:59.178378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.448 [2024-11-15 11:01:59.178398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:12.448 [2024-11-15 11:01:59.184211] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:12.448 [2024-11-15 11:01:59.184458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.448 [2024-11-15 11:01:59.184499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:12.448 [2024-11-15 11:01:59.190023] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:12.448 [2024-11-15 11:01:59.190175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.448 [2024-11-15 11:01:59.190197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:12.448 [2024-11-15 11:01:59.195566] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:12.448 [2024-11-15 11:01:59.195744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:18:12.448 [2024-11-15 11:01:59.195764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:12.448 [2024-11-15 11:01:59.200986] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:12.448 [2024-11-15 11:01:59.201140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.448 [2024-11-15 11:01:59.201160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:12.448 [2024-11-15 11:01:59.206352] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:12.448 [2024-11-15 11:01:59.206521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.448 [2024-11-15 11:01:59.206552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:12.448 [2024-11-15 11:01:59.211732] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:12.448 [2024-11-15 11:01:59.211929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.448 [2024-11-15 11:01:59.211951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:12.448 [2024-11-15 11:01:59.217175] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:12.448 [2024-11-15 11:01:59.217321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.448 [2024-11-15 11:01:59.217340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:12.448 [2024-11-15 11:01:59.222453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:12.448 [2024-11-15 11:01:59.222616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.448 [2024-11-15 11:01:59.222637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:12.448 [2024-11-15 11:01:59.227795] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:12.448 [2024-11-15 11:01:59.227995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.448 [2024-11-15 11:01:59.228016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:12.448 [2024-11-15 11:01:59.233207] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:12.448 [2024-11-15 11:01:59.233403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13536 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.449 [2024-11-15 11:01:59.233454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:12.449 [2024-11-15 11:01:59.238447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:12.449 [2024-11-15 11:01:59.238643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.449 [2024-11-15 11:01:59.238664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:12.449 [2024-11-15 11:01:59.243951] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:12.449 [2024-11-15 11:01:59.244310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.449 [2024-11-15 11:01:59.244340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:12.449 [2024-11-15 11:01:59.249822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:12.449 [2024-11-15 11:01:59.250123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.449 [2024-11-15 11:01:59.250152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:12.449 [2024-11-15 11:01:59.255590] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:12.449 [2024-11-15 11:01:59.255912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.449 [2024-11-15 11:01:59.255940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:12.449 [2024-11-15 11:01:59.261290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:12.449 [2024-11-15 11:01:59.261600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.449 [2024-11-15 11:01:59.261639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:12.449 [2024-11-15 11:01:59.267060] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:12.449 [2024-11-15 11:01:59.267328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.449 [2024-11-15 11:01:59.267354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:12.449 [2024-11-15 11:01:59.272908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:12.449 [2024-11-15 11:01:59.273154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:1 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.449 [2024-11-15 11:01:59.273174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:12.449 [2024-11-15 11:01:59.278599] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:12.449 [2024-11-15 11:01:59.278971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.449 [2024-11-15 11:01:59.278997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:12.449 [2024-11-15 11:01:59.284333] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:12.449 [2024-11-15 11:01:59.284604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.449 [2024-11-15 11:01:59.284640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:12.449 [2024-11-15 11:01:59.289916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:12.449 [2024-11-15 11:01:59.290161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.449 [2024-11-15 11:01:59.290186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:12.449 [2024-11-15 11:01:59.295479] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:12.449 [2024-11-15 11:01:59.295899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.449 [2024-11-15 11:01:59.295942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:12.449 [2024-11-15 11:01:59.301377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:12.449 [2024-11-15 11:01:59.301667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.449 [2024-11-15 11:01:59.301693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:12.708 [2024-11-15 11:01:59.306952] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:12.708 [2024-11-15 11:01:59.307236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.708 [2024-11-15 11:01:59.307262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:12.708 [2024-11-15 11:01:59.312642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:12.708 [2024-11-15 11:01:59.312945] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.708 [2024-11-15 11:01:59.312970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:12.708 [2024-11-15 11:01:59.318431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:12.708 [2024-11-15 11:01:59.318749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.708 [2024-11-15 11:01:59.318775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:12.708 [2024-11-15 11:01:59.324449] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xff4a90) with pdu=0x2000166ff3c8 00:18:12.708 [2024-11-15 11:01:59.324752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.708 [2024-11-15 11:01:59.324777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:12.708 5414.00 IOPS, 676.75 MiB/s 00:18:12.708 Latency(us) 00:18:12.708 [2024-11-15T11:01:59.569Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:12.708 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:18:12.708 nvme0n1 : 2.00 5412.78 676.60 0.00 0.00 2949.82 2219.29 11915.64 00:18:12.708 [2024-11-15T11:01:59.569Z] =================================================================================================================== 00:18:12.708 [2024-11-15T11:01:59.569Z] Total : 5412.78 676.60 0.00 0.00 2949.82 2219.29 11915.64 00:18:12.708 { 00:18:12.708 "results": [ 00:18:12.708 { 00:18:12.708 "job": "nvme0n1", 00:18:12.708 "core_mask": "0x2", 00:18:12.708 "workload": "randwrite", 00:18:12.708 "status": "finished", 00:18:12.708 "queue_depth": 16, 00:18:12.708 "io_size": 131072, 00:18:12.708 "runtime": 2.003222, 00:18:12.708 "iops": 5412.780011401632, 00:18:12.708 "mibps": 676.597501425204, 00:18:12.708 "io_failed": 0, 00:18:12.708 "io_timeout": 0, 00:18:12.708 "avg_latency_us": 2949.82331156255, 00:18:12.708 "min_latency_us": 2219.287272727273, 00:18:12.708 "max_latency_us": 11915.636363636364 00:18:12.708 } 00:18:12.708 ], 00:18:12.708 "core_count": 1 00:18:12.708 } 00:18:12.708 11:01:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:18:12.708 11:01:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:18:12.708 11:01:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:18:12.708 | .driver_specific 00:18:12.708 | .nvme_error 00:18:12.708 | .status_code 00:18:12.708 | .command_transient_transport_error' 00:18:12.708 11:01:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:18:12.968 11:01:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 350 > 0 )) 00:18:12.968 11:01:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80173 00:18:12.968 11:01:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@954 -- # '[' -z 80173 ']' 00:18:12.968 11:01:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80173 00:18:12.968 11:01:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:18:12.968 11:01:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:12.968 11:01:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80173 00:18:12.968 killing process with pid 80173 00:18:12.968 Received shutdown signal, test time was about 2.000000 seconds 00:18:12.968 00:18:12.968 Latency(us) 00:18:12.968 [2024-11-15T11:01:59.829Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:12.968 [2024-11-15T11:01:59.829Z] =================================================================================================================== 00:18:12.968 [2024-11-15T11:01:59.829Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:12.968 11:01:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:12.968 11:01:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:12.968 11:01:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80173' 00:18:12.968 11:01:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80173 00:18:12.968 11:01:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80173 00:18:13.227 11:01:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 79983 00:18:13.227 11:01:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 79983 ']' 00:18:13.227 11:01:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 79983 00:18:13.227 11:01:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:18:13.227 11:01:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:13.227 11:01:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79983 00:18:13.227 killing process with pid 79983 00:18:13.227 11:01:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:13.227 11:01:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:13.227 11:01:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79983' 00:18:13.227 11:01:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 79983 00:18:13.227 11:01:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 79983 00:18:13.486 00:18:13.486 real 0m16.097s 00:18:13.486 user 0m30.428s 00:18:13.486 sys 0m5.470s 00:18:13.486 11:02:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:13.486 11:02:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:13.486 ************************************ 
00:18:13.486 END TEST nvmf_digest_error 00:18:13.486 ************************************ 00:18:13.486 11:02:00 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:18:13.486 11:02:00 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:18:13.486 11:02:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:13.486 11:02:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:18:13.486 11:02:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:13.486 11:02:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:18:13.486 11:02:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:13.486 11:02:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:13.486 rmmod nvme_tcp 00:18:13.486 rmmod nvme_fabrics 00:18:13.486 rmmod nvme_keyring 00:18:13.486 11:02:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:13.486 11:02:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:18:13.486 11:02:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:18:13.486 11:02:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 79983 ']' 00:18:13.486 11:02:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 79983 00:18:13.486 11:02:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 79983 ']' 00:18:13.486 11:02:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 79983 00:18:13.486 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (79983) - No such process 00:18:13.486 Process with pid 79983 is not found 00:18:13.486 11:02:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 79983 is not found' 00:18:13.486 11:02:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:13.486 11:02:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:13.486 11:02:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:13.486 11:02:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:18:13.486 11:02:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:18:13.486 11:02:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:13.486 11:02:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:18:13.486 11:02:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:13.486 11:02:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:13.486 11:02:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:13.486 11:02:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:13.486 11:02:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:13.745 11:02:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:13.745 11:02:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:13.745 11:02:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:13.745 11:02:00 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:13.745 11:02:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:13.745 11:02:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:13.745 11:02:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:13.745 11:02:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:13.745 11:02:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:13.745 11:02:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:13.745 11:02:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:13.745 11:02:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:13.745 11:02:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:13.745 11:02:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:13.745 11:02:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@300 -- # return 0 00:18:13.745 00:18:13.745 real 0m33.826s 00:18:13.745 user 1m2.081s 00:18:13.745 sys 0m10.929s 00:18:13.745 11:02:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:13.745 11:02:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:18:13.745 ************************************ 00:18:13.745 END TEST nvmf_digest 00:18:13.745 ************************************ 00:18:13.745 11:02:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:18:13.745 11:02:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 1 -eq 1 ]] 00:18:13.745 11:02:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:18:13.745 11:02:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:13.745 11:02:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:13.745 11:02:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:13.745 ************************************ 00:18:13.745 START TEST nvmf_host_multipath 00:18:13.745 ************************************ 00:18:13.745 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:18:14.003 * Looking for test storage... 
00:18:14.003 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:14.003 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:14.003 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:18:14.003 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:14.003 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:14.003 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:14.003 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:14.003 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:14.003 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:18:14.003 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:18:14.003 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:18:14.003 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:18:14.003 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:18:14.003 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:18:14.003 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:18:14.003 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:14.003 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@344 -- # case "$op" in 00:18:14.003 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@345 -- # : 1 00:18:14.003 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:14.003 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:14.003 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # decimal 1 00:18:14.003 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=1 00:18:14.003 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:14.003 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 1 00:18:14.003 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:18:14.003 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # decimal 2 00:18:14.003 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=2 00:18:14.003 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:14.003 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 2 00:18:14.003 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:18:14.003 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:14.003 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:14.003 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # return 0 00:18:14.003 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:14.003 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:14.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:14.003 --rc genhtml_branch_coverage=1 00:18:14.003 --rc genhtml_function_coverage=1 00:18:14.003 --rc genhtml_legend=1 00:18:14.003 --rc geninfo_all_blocks=1 00:18:14.003 --rc geninfo_unexecuted_blocks=1 00:18:14.003 00:18:14.003 ' 00:18:14.003 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:14.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:14.003 --rc genhtml_branch_coverage=1 00:18:14.003 --rc genhtml_function_coverage=1 00:18:14.003 --rc genhtml_legend=1 00:18:14.003 --rc geninfo_all_blocks=1 00:18:14.003 --rc geninfo_unexecuted_blocks=1 00:18:14.003 00:18:14.003 ' 00:18:14.003 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:14.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:14.003 --rc genhtml_branch_coverage=1 00:18:14.003 --rc genhtml_function_coverage=1 00:18:14.003 --rc genhtml_legend=1 00:18:14.003 --rc geninfo_all_blocks=1 00:18:14.003 --rc geninfo_unexecuted_blocks=1 00:18:14.003 00:18:14.003 ' 00:18:14.003 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:14.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:14.003 --rc genhtml_branch_coverage=1 00:18:14.003 --rc genhtml_function_coverage=1 00:18:14.003 --rc genhtml_legend=1 00:18:14.003 --rc geninfo_all_blocks=1 00:18:14.003 --rc geninfo_unexecuted_blocks=1 00:18:14.003 00:18:14.003 ' 00:18:14.003 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:14.003 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:18:14.003 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:14.003 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:14.003 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:14.003 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:14.003 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:14.003 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:14.003 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:14.003 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:14.003 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:14.003 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:14.003 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:18:14.003 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:18:14.003 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:14.003 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:14.003 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:14.003 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:14.003 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:14.003 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:18:14.003 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:14.004 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:14.004 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:14.004 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.004 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.004 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.004 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:18:14.004 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.004 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@51 -- # : 0 00:18:14.004 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:14.004 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:14.004 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:14.004 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:14.004 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:14.004 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:14.004 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:14.004 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:14.004 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:14.004 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:14.004 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:14.004 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:14.004 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@14 
-- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:14.004 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:18:14.004 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:14.004 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:18:14.004 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:18:14.004 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:14.004 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:14.004 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:14.004 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:14.004 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:14.004 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:14.004 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:14.004 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:14.004 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:14.004 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:14.004 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:14.004 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:14.004 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:14.004 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:14.004 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:14.004 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:14.004 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:14.004 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:14.004 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:14.004 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:14.004 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:14.004 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:14.004 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:14.004 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:14.004 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:14.004 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:14.004 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:14.004 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:14.004 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:14.004 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:14.004 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:14.004 Cannot find device "nvmf_init_br" 00:18:14.004 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:18:14.004 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:14.004 Cannot find device "nvmf_init_br2" 00:18:14.004 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:18:14.004 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:14.004 Cannot find device "nvmf_tgt_br" 00:18:14.004 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # true 00:18:14.004 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:14.004 Cannot find device "nvmf_tgt_br2" 00:18:14.004 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # true 00:18:14.004 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:14.004 Cannot find device "nvmf_init_br" 00:18:14.004 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # true 00:18:14.004 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:14.263 Cannot find device "nvmf_init_br2" 00:18:14.263 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # true 00:18:14.263 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:14.263 Cannot find device "nvmf_tgt_br" 00:18:14.263 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # true 00:18:14.263 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:14.263 Cannot find device "nvmf_tgt_br2" 00:18:14.263 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # true 00:18:14.263 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:14.263 Cannot find device "nvmf_br" 00:18:14.263 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # true 00:18:14.263 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:14.263 Cannot find device "nvmf_init_if" 00:18:14.263 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # true 00:18:14.263 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:14.263 Cannot find device "nvmf_init_if2" 00:18:14.263 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # true 00:18:14.263 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:18:14.263 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:14.263 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # true 00:18:14.263 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:14.263 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:14.263 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # true 00:18:14.263 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:14.263 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:14.263 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:14.263 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:14.263 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:14.263 11:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:14.263 11:02:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:14.263 11:02:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:14.263 11:02:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:14.263 11:02:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:14.263 11:02:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:14.263 11:02:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:14.263 11:02:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:14.263 11:02:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:14.263 11:02:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:14.263 11:02:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:14.263 11:02:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:14.263 11:02:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:14.263 11:02:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:14.263 11:02:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:14.263 11:02:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:14.522 11:02:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:14.522 11:02:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 
00:18:14.522 11:02:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:14.522 11:02:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:14.522 11:02:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:14.522 11:02:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:14.522 11:02:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:14.522 11:02:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:14.522 11:02:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:14.522 11:02:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:14.522 11:02:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:14.522 11:02:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:14.522 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:14.522 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.096 ms 00:18:14.522 00:18:14.522 --- 10.0.0.3 ping statistics --- 00:18:14.522 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:14.522 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:18:14.522 11:02:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:14.522 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:14.522 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.070 ms 00:18:14.522 00:18:14.522 --- 10.0.0.4 ping statistics --- 00:18:14.522 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:14.522 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:18:14.522 11:02:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:14.522 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:14.522 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:18:14.522 00:18:14.522 --- 10.0.0.1 ping statistics --- 00:18:14.522 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:14.522 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:18:14.522 11:02:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:14.522 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:14.522 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.044 ms 00:18:14.522 00:18:14.522 --- 10.0.0.2 ping statistics --- 00:18:14.522 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:14.522 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:18:14.522 11:02:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:14.522 11:02:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@461 -- # return 0 00:18:14.522 11:02:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:14.522 11:02:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:14.522 11:02:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:14.522 11:02:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:14.522 11:02:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:14.522 11:02:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:14.522 11:02:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:14.522 11:02:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:18:14.522 11:02:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:14.522 11:02:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:14.522 11:02:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:14.522 11:02:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@509 -- # nvmfpid=80477 00:18:14.522 11:02:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@510 -- # waitforlisten 80477 00:18:14.522 11:02:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:18:14.522 11:02:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 80477 ']' 00:18:14.522 11:02:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:14.522 11:02:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:14.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:14.523 11:02:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:14.523 11:02:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:14.523 11:02:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:14.523 [2024-11-15 11:02:01.314407] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
00:18:14.523 [2024-11-15 11:02:01.314495] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:14.781 [2024-11-15 11:02:01.460635] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:14.781 [2024-11-15 11:02:01.507049] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:14.781 [2024-11-15 11:02:01.507124] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:14.781 [2024-11-15 11:02:01.507136] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:14.781 [2024-11-15 11:02:01.507144] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:14.781 [2024-11-15 11:02:01.507151] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:14.781 [2024-11-15 11:02:01.508431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:14.781 [2024-11-15 11:02:01.508449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:14.781 [2024-11-15 11:02:01.578834] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:15.040 11:02:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:15.040 11:02:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:18:15.040 11:02:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:15.040 11:02:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:15.040 11:02:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:15.040 11:02:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:15.040 11:02:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=80477 00:18:15.040 11:02:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:15.298 [2024-11-15 11:02:01.978888] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:15.298 11:02:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:18:15.556 Malloc0 00:18:15.556 11:02:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:18:15.814 11:02:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:16.073 11:02:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:16.330 [2024-11-15 11:02:03.108353] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:16.330 11:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:18:16.589 [2024-11-15 11:02:03.372619] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:18:16.589 11:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=80531 00:18:16.589 11:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:18:16.589 11:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:16.589 11:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 80531 /var/tmp/bdevperf.sock 00:18:16.589 11:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 80531 ']' 00:18:16.589 11:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:16.589 11:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:16.589 11:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:16.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:16.589 11:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:16.589 11:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:17.966 11:02:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:17.966 11:02:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:18:17.966 11:02:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:18:17.966 11:02:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:18:18.224 Nvme0n1 00:18:18.224 11:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:18:18.791 Nvme0n1 00:18:18.791 11:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:18:18.791 11:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:18:19.727 11:02:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:18:19.727 11:02:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:18:19.986 11:02:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:18:20.244 11:02:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:18:20.244 11:02:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80477 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:20.244 11:02:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80576 00:18:20.244 11:02:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:26.810 11:02:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:26.810 11:02:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:26.810 11:02:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:18:26.810 11:02:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:26.811 Attaching 4 probes... 00:18:26.811 @path[10.0.0.3, 4421]: 17832 00:18:26.811 @path[10.0.0.3, 4421]: 18272 00:18:26.811 @path[10.0.0.3, 4421]: 17312 00:18:26.811 @path[10.0.0.3, 4421]: 17615 00:18:26.811 @path[10.0.0.3, 4421]: 17676 00:18:26.811 11:02:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:18:26.811 11:02:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:26.811 11:02:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:26.811 11:02:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:18:26.811 11:02:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:26.811 11:02:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:26.811 11:02:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80576 00:18:26.811 11:02:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:26.811 11:02:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:18:26.811 11:02:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:18:26.811 11:02:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:18:27.070 11:02:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:18:27.070 11:02:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80690 00:18:27.070 11:02:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80477 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:27.070 11:02:13 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:33.637 11:02:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:33.637 11:02:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:18:33.637 11:02:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:18:33.637 11:02:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:33.637 Attaching 4 probes... 00:18:33.637 @path[10.0.0.3, 4420]: 18452 00:18:33.637 @path[10.0.0.3, 4420]: 18760 00:18:33.637 @path[10.0.0.3, 4420]: 18656 00:18:33.637 @path[10.0.0.3, 4420]: 18325 00:18:33.637 @path[10.0.0.3, 4420]: 18524 00:18:33.637 11:02:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:33.637 11:02:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:33.637 11:02:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:18:33.637 11:02:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:18:33.637 11:02:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:18:33.637 11:02:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:18:33.637 11:02:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80690 00:18:33.637 11:02:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:33.637 11:02:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:18:33.637 11:02:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:18:33.637 11:02:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:18:33.896 11:02:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:18:33.896 11:02:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80802 00:18:33.896 11:02:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80477 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:33.896 11:02:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:40.460 11:02:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:40.460 11:02:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:40.460 11:02:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:18:40.460 11:02:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:40.460 Attaching 4 probes... 00:18:40.460 @path[10.0.0.3, 4421]: 14040 00:18:40.460 @path[10.0.0.3, 4421]: 17504 00:18:40.460 @path[10.0.0.3, 4421]: 17920 00:18:40.460 @path[10.0.0.3, 4421]: 17776 00:18:40.460 @path[10.0.0.3, 4421]: 17440 00:18:40.460 11:02:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:40.461 11:02:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:18:40.461 11:02:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:40.461 11:02:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:18:40.461 11:02:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:40.461 11:02:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:40.461 11:02:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80802 00:18:40.461 11:02:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:40.461 11:02:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:18:40.461 11:02:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:18:40.461 11:02:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:18:40.720 11:02:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:18:40.720 11:02:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80477 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:40.720 11:02:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80920 00:18:40.720 11:02:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:47.291 11:02:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:18:47.291 11:02:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:47.291 11:02:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:18:47.291 11:02:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:47.291 Attaching 4 probes... 
00:18:47.291 00:18:47.291 00:18:47.291 00:18:47.291 00:18:47.291 00:18:47.291 11:02:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:18:47.291 11:02:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:47.291 11:02:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:47.291 11:02:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:18:47.291 11:02:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:18:47.292 11:02:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:18:47.292 11:02:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80920 00:18:47.292 11:02:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:47.292 11:02:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:18:47.292 11:02:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:18:47.292 11:02:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:18:47.550 11:02:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:18:47.550 11:02:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80477 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:47.550 11:02:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81039 00:18:47.550 11:02:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:54.118 11:02:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:54.118 11:02:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:54.118 11:02:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:18:54.118 11:02:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:54.118 Attaching 4 probes... 
00:18:54.118 @path[10.0.0.3, 4421]: 17337 00:18:54.118 @path[10.0.0.3, 4421]: 17785 00:18:54.118 @path[10.0.0.3, 4421]: 18246 00:18:54.118 @path[10.0.0.3, 4421]: 17964 00:18:54.118 @path[10.0.0.3, 4421]: 17664 00:18:54.118 11:02:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:54.118 11:02:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:18:54.118 11:02:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:54.118 11:02:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:18:54.118 11:02:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:54.118 11:02:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:54.118 11:02:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81039 00:18:54.118 11:02:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:54.118 11:02:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:18:54.118 [2024-11-15 11:02:40.819099] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899890 is same with the state(6) to be set 00:18:54.118 [2024-11-15 11:02:40.819147] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899890 is same with the state(6) to be set 00:18:54.118 [2024-11-15 11:02:40.819183] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899890 is same with the state(6) to be set 00:18:54.118 11:02:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:18:55.056 11:02:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:18:55.056 11:02:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81158 00:18:55.056 11:02:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80477 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:55.056 11:02:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:01.624 11:02:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:01.624 11:02:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:19:01.624 11:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:19:01.624 11:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:01.624 Attaching 4 probes... 
00:19:01.624 @path[10.0.0.3, 4420]: 17372 00:19:01.624 @path[10.0.0.3, 4420]: 17339 00:19:01.624 @path[10.0.0.3, 4420]: 18140 00:19:01.624 @path[10.0.0.3, 4420]: 18927 00:19:01.624 @path[10.0.0.3, 4420]: 17798 00:19:01.624 11:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:01.624 11:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:19:01.624 11:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:01.624 11:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:19:01.624 11:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:19:01.624 11:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:19:01.624 11:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81158 00:19:01.624 11:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:01.624 11:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:19:01.624 [2024-11-15 11:02:48.356698] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:19:01.624 11:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:19:01.883 11:02:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:19:08.528 11:02:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:19:08.528 11:02:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81338 00:19:08.528 11:02:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80477 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:08.528 11:02:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:13.793 11:03:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:13.793 11:03:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:19:14.357 11:03:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:19:14.357 11:03:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:14.357 Attaching 4 probes... 
00:19:14.357 @path[10.0.0.3, 4421]: 17420 00:19:14.357 @path[10.0.0.3, 4421]: 17744 00:19:14.357 @path[10.0.0.3, 4421]: 17670 00:19:14.357 @path[10.0.0.3, 4421]: 17941 00:19:14.357 @path[10.0.0.3, 4421]: 17491 00:19:14.357 11:03:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:14.357 11:03:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:19:14.357 11:03:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:14.357 11:03:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:19:14.357 11:03:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:19:14.357 11:03:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:19:14.357 11:03:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81338 00:19:14.357 11:03:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:14.357 11:03:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 80531 00:19:14.357 11:03:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 80531 ']' 00:19:14.357 11:03:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 80531 00:19:14.357 11:03:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname 00:19:14.357 11:03:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:14.357 11:03:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80531 00:19:14.357 killing process with pid 80531 00:19:14.357 11:03:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:14.357 11:03:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:14.357 11:03:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80531' 00:19:14.357 11:03:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 80531 00:19:14.357 11:03:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 80531 00:19:14.357 { 00:19:14.357 "results": [ 00:19:14.357 { 00:19:14.357 "job": "Nvme0n1", 00:19:14.357 "core_mask": "0x4", 00:19:14.357 "workload": "verify", 00:19:14.357 "status": "terminated", 00:19:14.357 "verify_range": { 00:19:14.357 "start": 0, 00:19:14.357 "length": 16384 00:19:14.357 }, 00:19:14.357 "queue_depth": 128, 00:19:14.357 "io_size": 4096, 00:19:14.357 "runtime": 55.464116, 00:19:14.357 "iops": 7667.101374156941, 00:19:14.357 "mibps": 29.949614742800552, 00:19:14.357 "io_failed": 0, 00:19:14.357 "io_timeout": 0, 00:19:14.357 "avg_latency_us": 16665.277865293465, 00:19:14.357 "min_latency_us": 1802.24, 00:19:14.357 "max_latency_us": 7015926.69090909 00:19:14.357 } 00:19:14.357 ], 00:19:14.357 "core_count": 1 00:19:14.357 } 00:19:14.623 11:03:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 80531 00:19:14.623 11:03:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:14.623 [2024-11-15 11:02:03.449651] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 
24.03.0 initialization... 00:19:14.623 [2024-11-15 11:02:03.450247] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80531 ] 00:19:14.623 [2024-11-15 11:02:03.597399] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:14.624 [2024-11-15 11:02:03.648098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:14.624 [2024-11-15 11:02:03.700394] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:14.624 Running I/O for 90 seconds... 00:19:14.624 9003.00 IOPS, 35.17 MiB/s [2024-11-15T11:03:01.485Z] 9069.00 IOPS, 35.43 MiB/s [2024-11-15T11:03:01.485Z] 9075.33 IOPS, 35.45 MiB/s [2024-11-15T11:03:01.485Z] 9104.50 IOPS, 35.56 MiB/s [2024-11-15T11:03:01.485Z] 9013.20 IOPS, 35.21 MiB/s [2024-11-15T11:03:01.485Z] 8983.00 IOPS, 35.09 MiB/s [2024-11-15T11:03:01.485Z] 8958.00 IOPS, 34.99 MiB/s [2024-11-15T11:03:01.485Z] 8923.25 IOPS, 34.86 MiB/s [2024-11-15T11:03:01.485Z] [2024-11-15 11:02:13.742811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:63120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.624 [2024-11-15 11:02:13.742886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:14.624 [2024-11-15 11:02:13.742955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:63128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.624 [2024-11-15 11:02:13.742974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:14.624 [2024-11-15 11:02:13.742995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:63136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.624 [2024-11-15 11:02:13.743009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:14.624 [2024-11-15 11:02:13.743029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:63144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.624 [2024-11-15 11:02:13.743042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:14.624 [2024-11-15 11:02:13.743061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:63152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.624 [2024-11-15 11:02:13.743075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:14.624 [2024-11-15 11:02:13.743094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:63160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.624 [2024-11-15 11:02:13.743107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:14.624 [2024-11-15 11:02:13.743126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:63168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.624 [2024-11-15 11:02:13.743140] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:14.624 [2024-11-15 11:02:13.743159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:63176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.624 [2024-11-15 11:02:13.743173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:14.624 [2024-11-15 11:02:13.743191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:62672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.624 [2024-11-15 11:02:13.743205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:14.624 [2024-11-15 11:02:13.743224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:62680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.624 [2024-11-15 11:02:13.743260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:14.624 [2024-11-15 11:02:13.743282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:62688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.624 [2024-11-15 11:02:13.743296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:14.624 [2024-11-15 11:02:13.743315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:62696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.624 [2024-11-15 11:02:13.743329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:14.624 [2024-11-15 11:02:13.743347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:62704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.624 [2024-11-15 11:02:13.743361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:14.624 [2024-11-15 11:02:13.743379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:62712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.624 [2024-11-15 11:02:13.743392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:14.624 [2024-11-15 11:02:13.743411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:62720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.624 [2024-11-15 11:02:13.743424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:14.624 [2024-11-15 11:02:13.743443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:62728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.624 [2024-11-15 11:02:13.743456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:14.624 [2024-11-15 11:02:13.743475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:62736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:19:14.624 [2024-11-15 11:02:13.743489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:14.624 [2024-11-15 11:02:13.743509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:62744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.624 [2024-11-15 11:02:13.743524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:14.624 [2024-11-15 11:02:13.743570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:62752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.624 [2024-11-15 11:02:13.743587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:14.624 [2024-11-15 11:02:13.743607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:62760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.624 [2024-11-15 11:02:13.743621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:14.624 [2024-11-15 11:02:13.743639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:62768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.624 [2024-11-15 11:02:13.743653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:14.624 [2024-11-15 11:02:13.743673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:62776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.624 [2024-11-15 11:02:13.743695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:14.624 [2024-11-15 11:02:13.743732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:62784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.624 [2024-11-15 11:02:13.743748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:14.624 [2024-11-15 11:02:13.743768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:62792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.624 [2024-11-15 11:02:13.743783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:14.625 [2024-11-15 11:02:13.743803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:62800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.625 [2024-11-15 11:02:13.743844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:14.625 [2024-11-15 11:02:13.743866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:62808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.625 [2024-11-15 11:02:13.743881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:14.625 [2024-11-15 11:02:13.743909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 
nsid:1 lba:62816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.625 [2024-11-15 11:02:13.743937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:14.625 [2024-11-15 11:02:13.743977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:62824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.625 [2024-11-15 11:02:13.744006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:14.625 [2024-11-15 11:02:13.744044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:62832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.625 [2024-11-15 11:02:13.744072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:14.625 [2024-11-15 11:02:13.744122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:62840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.625 [2024-11-15 11:02:13.744145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:14.625 [2024-11-15 11:02:13.744204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:62848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.625 [2024-11-15 11:02:13.744227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:14.625 [2024-11-15 11:02:13.744273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:62856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.625 [2024-11-15 11:02:13.744297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:14.625 [2024-11-15 11:02:13.744349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:63184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.625 [2024-11-15 11:02:13.744371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:14.625 [2024-11-15 11:02:13.744393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:63192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.625 [2024-11-15 11:02:13.744409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:14.625 [2024-11-15 11:02:13.744470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:63200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.625 [2024-11-15 11:02:13.744488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:14.625 [2024-11-15 11:02:13.744508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:63208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.625 [2024-11-15 11:02:13.744522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:14.625 [2024-11-15 11:02:13.744541] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:63216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.625 [2024-11-15 11:02:13.744555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:14.625 [2024-11-15 11:02:13.744590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:63224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.625 [2024-11-15 11:02:13.744605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:14.625 [2024-11-15 11:02:13.744624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:63232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.625 [2024-11-15 11:02:13.744639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:14.625 [2024-11-15 11:02:13.744679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:63240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.625 [2024-11-15 11:02:13.744694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:14.625 [2024-11-15 11:02:13.744730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:63248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.625 [2024-11-15 11:02:13.744761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:14.625 [2024-11-15 11:02:13.744781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:63256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.625 [2024-11-15 11:02:13.744795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:14.625 [2024-11-15 11:02:13.744815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:63264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.625 [2024-11-15 11:02:13.744830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:14.625 [2024-11-15 11:02:13.744849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:63272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.625 [2024-11-15 11:02:13.744863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:14.625 [2024-11-15 11:02:13.744883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:63280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.625 [2024-11-15 11:02:13.744897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:14.625 [2024-11-15 11:02:13.744917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:63288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.625 [2024-11-15 11:02:13.744932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 
00:19:14.625 [2024-11-15 11:02:13.744975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:63296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.625 [2024-11-15 11:02:13.744991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:14.625 [2024-11-15 11:02:13.745011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:63304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.625 [2024-11-15 11:02:13.745025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:14.625 [2024-11-15 11:02:13.745045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:63312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.625 [2024-11-15 11:02:13.745059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:14.625 [2024-11-15 11:02:13.745079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:63320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.625 [2024-11-15 11:02:13.745094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:14.626 [2024-11-15 11:02:13.745113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:63328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.626 [2024-11-15 11:02:13.745127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:14.626 [2024-11-15 11:02:13.745147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:63336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.626 [2024-11-15 11:02:13.745161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:14.626 [2024-11-15 11:02:13.745180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:63344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.626 [2024-11-15 11:02:13.745194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:14.626 [2024-11-15 11:02:13.745214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:63352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.626 [2024-11-15 11:02:13.745228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:14.626 [2024-11-15 11:02:13.745247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:63360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.626 [2024-11-15 11:02:13.745261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:14.626 [2024-11-15 11:02:13.745281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:63368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.626 [2024-11-15 11:02:13.745295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:14.626 [2024-11-15 11:02:13.745314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:62864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.626 [2024-11-15 11:02:13.745328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:14.626 [2024-11-15 11:02:13.745348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:62872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.626 [2024-11-15 11:02:13.745362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:14.626 [2024-11-15 11:02:13.745381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:62880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.626 [2024-11-15 11:02:13.745401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:14.626 [2024-11-15 11:02:13.745447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:62888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.626 [2024-11-15 11:02:13.745468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:14.626 [2024-11-15 11:02:13.745490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:62896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.626 [2024-11-15 11:02:13.745504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:14.626 [2024-11-15 11:02:13.745524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:62904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.626 [2024-11-15 11:02:13.745539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:14.626 [2024-11-15 11:02:13.745558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:62912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.626 [2024-11-15 11:02:13.745573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:14.626 [2024-11-15 11:02:13.745605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:62920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.626 [2024-11-15 11:02:13.745622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:14.626 [2024-11-15 11:02:13.745644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:63376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.626 [2024-11-15 11:02:13.745659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:14.626 [2024-11-15 11:02:13.745680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:63384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.626 [2024-11-15 11:02:13.745694] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:14.626 [2024-11-15 11:02:13.745714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:63392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.626 [2024-11-15 11:02:13.745745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:14.626 [2024-11-15 11:02:13.745764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:63400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.626 [2024-11-15 11:02:13.745778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:14.626 [2024-11-15 11:02:13.745798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:63408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.626 [2024-11-15 11:02:13.745812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:14.626 [2024-11-15 11:02:13.745831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:63416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.626 [2024-11-15 11:02:13.745853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:14.626 [2024-11-15 11:02:13.745872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:63424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.626 [2024-11-15 11:02:13.745894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:14.626 [2024-11-15 11:02:13.745916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:63432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.626 [2024-11-15 11:02:13.745931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:14.626 [2024-11-15 11:02:13.745987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:63440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.626 [2024-11-15 11:02:13.746006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.626 [2024-11-15 11:02:13.746028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:63448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.626 [2024-11-15 11:02:13.746042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:14.626 [2024-11-15 11:02:13.746062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:63456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.626 [2024-11-15 11:02:13.746077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:14.626 [2024-11-15 11:02:13.746096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:63464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:14.626 [2024-11-15 11:02:13.746111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:14.626 [2024-11-15 11:02:13.746131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:63472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.627 [2024-11-15 11:02:13.746145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:14.627 [2024-11-15 11:02:13.746165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:63480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.627 [2024-11-15 11:02:13.746179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:14.627 [2024-11-15 11:02:13.746199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:63488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.627 [2024-11-15 11:02:13.746247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:14.627 [2024-11-15 11:02:13.746287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:63496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.627 [2024-11-15 11:02:13.746303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:14.627 [2024-11-15 11:02:13.746325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:62928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.627 [2024-11-15 11:02:13.746341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:14.627 [2024-11-15 11:02:13.746363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:62936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.627 [2024-11-15 11:02:13.746378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:14.627 [2024-11-15 11:02:13.746401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:62944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.627 [2024-11-15 11:02:13.746422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:14.627 [2024-11-15 11:02:13.746466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:62952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.627 [2024-11-15 11:02:13.746484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:14.627 [2024-11-15 11:02:13.746507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:62960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.627 [2024-11-15 11:02:13.746522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:14.627 [2024-11-15 11:02:13.746544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 
lba:62968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.627 [2024-11-15 11:02:13.746559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:14.627 [2024-11-15 11:02:13.746581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:62976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.627 [2024-11-15 11:02:13.746597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:14.627 [2024-11-15 11:02:13.746648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:62984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.627 [2024-11-15 11:02:13.746664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:14.627 [2024-11-15 11:02:13.746700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:63504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.627 [2024-11-15 11:02:13.746732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:14.627 [2024-11-15 11:02:13.746767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.627 [2024-11-15 11:02:13.746781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:14.627 [2024-11-15 11:02:13.746800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:63520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.627 [2024-11-15 11:02:13.746814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:14.627 [2024-11-15 11:02:13.746833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:63528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.627 [2024-11-15 11:02:13.746847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:14.627 [2024-11-15 11:02:13.746867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:63536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.627 [2024-11-15 11:02:13.746881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:14.627 [2024-11-15 11:02:13.746900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:63544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.627 [2024-11-15 11:02:13.746914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:14.627 [2024-11-15 11:02:13.746934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:63552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.627 [2024-11-15 11:02:13.746948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:14.627 [2024-11-15 11:02:13.746976] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:63560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.627 [2024-11-15 11:02:13.746999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:14.627 [2024-11-15 11:02:13.747020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:62992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.627 [2024-11-15 11:02:13.747034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:14.627 [2024-11-15 11:02:13.747054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:63000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.627 [2024-11-15 11:02:13.747068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:14.627 [2024-11-15 11:02:13.747087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:63008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.627 [2024-11-15 11:02:13.747101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:14.627 [2024-11-15 11:02:13.747120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:63016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.627 [2024-11-15 11:02:13.747134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:14.627 [2024-11-15 11:02:13.747153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:63024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.627 [2024-11-15 11:02:13.747167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:14.627 [2024-11-15 11:02:13.747186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:63032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.627 [2024-11-15 11:02:13.747200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:14.627 [2024-11-15 11:02:13.747220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:63040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.627 [2024-11-15 11:02:13.747233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:14.627 [2024-11-15 11:02:13.747252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:63048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.628 [2024-11-15 11:02:13.747266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:14.628 [2024-11-15 11:02:13.747286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:63056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.628 [2024-11-15 11:02:13.747300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 
00:19:14.628 [2024-11-15 11:02:13.747320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:63064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.628 [2024-11-15 11:02:13.747334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:14.628 [2024-11-15 11:02:13.747354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:63072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.628 [2024-11-15 11:02:13.747367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:14.628 [2024-11-15 11:02:13.747387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:63080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.628 [2024-11-15 11:02:13.747407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:14.628 [2024-11-15 11:02:13.747473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:63088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.628 [2024-11-15 11:02:13.747492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:14.628 [2024-11-15 11:02:13.747514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:63096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.628 [2024-11-15 11:02:13.747530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:14.628 [2024-11-15 11:02:13.747551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:63104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.628 [2024-11-15 11:02:13.747566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:14.628 [2024-11-15 11:02:13.749264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:63112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.628 [2024-11-15 11:02:13.749300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:14.628 [2024-11-15 11:02:13.749328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:63568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.628 [2024-11-15 11:02:13.749343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:14.628 [2024-11-15 11:02:13.749363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:63576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.628 [2024-11-15 11:02:13.749377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:14.628 [2024-11-15 11:02:13.749397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:63584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.628 [2024-11-15 11:02:13.749412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:5 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:14.628 [2024-11-15 11:02:13.749479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:63592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.628 [2024-11-15 11:02:13.749498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:14.628 [2024-11-15 11:02:13.749519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:63600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.628 [2024-11-15 11:02:13.749534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:14.628 [2024-11-15 11:02:13.749554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:63608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.628 [2024-11-15 11:02:13.749569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:14.628 [2024-11-15 11:02:13.749590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:63616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.628 [2024-11-15 11:02:13.749606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:14.628 [2024-11-15 11:02:13.750079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:63624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.628 [2024-11-15 11:02:13.750115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:14.628 [2024-11-15 11:02:13.750141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:63632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.628 [2024-11-15 11:02:13.750158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:14.628 [2024-11-15 11:02:13.750178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:63640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.628 [2024-11-15 11:02:13.750192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:14.628 [2024-11-15 11:02:13.750212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:63648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.628 [2024-11-15 11:02:13.750226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:14.628 [2024-11-15 11:02:13.750261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:63656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.628 [2024-11-15 11:02:13.750274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:14.628 [2024-11-15 11:02:13.750293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:63664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.628 [2024-11-15 11:02:13.750307] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:14.628 [2024-11-15 11:02:13.750325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:63672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.628 [2024-11-15 11:02:13.750339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:14.628 [2024-11-15 11:02:13.750358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:63680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.628 [2024-11-15 11:02:13.750372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:14.628 [2024-11-15 11:02:13.750394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:63688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.628 [2024-11-15 11:02:13.750416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:14.628 8936.78 IOPS, 34.91 MiB/s [2024-11-15T11:03:01.489Z] 8984.30 IOPS, 35.09 MiB/s [2024-11-15T11:03:01.489Z] 9017.00 IOPS, 35.22 MiB/s [2024-11-15T11:03:01.489Z] 9039.58 IOPS, 35.31 MiB/s [2024-11-15T11:03:01.489Z] 9052.23 IOPS, 35.36 MiB/s [2024-11-15T11:03:01.489Z] 9070.50 IOPS, 35.43 MiB/s [2024-11-15T11:03:01.489Z] [2024-11-15 11:02:20.285196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:24920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.628 [2024-11-15 11:02:20.285253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:14.628 [2024-11-15 11:02:20.285320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:24928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.628 [2024-11-15 11:02:20.285339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:14.629 [2024-11-15 11:02:20.285359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:24936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.629 [2024-11-15 11:02:20.285373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:14.629 [2024-11-15 11:02:20.285392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.629 [2024-11-15 11:02:20.285463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:14.629 [2024-11-15 11:02:20.285487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:24280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.629 [2024-11-15 11:02:20.285502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:14.629 [2024-11-15 11:02:20.285524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:24288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.629 [2024-11-15 11:02:20.285539] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:14.629 [2024-11-15 11:02:20.285559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:24296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.629 [2024-11-15 11:02:20.285587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:14.629 [2024-11-15 11:02:20.285611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:24304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.629 [2024-11-15 11:02:20.285626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:14.629 [2024-11-15 11:02:20.285647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.629 [2024-11-15 11:02:20.285662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:14.629 [2024-11-15 11:02:20.285683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:24320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.629 [2024-11-15 11:02:20.285698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:14.629 [2024-11-15 11:02:20.285718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.629 [2024-11-15 11:02:20.285735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:14.629 [2024-11-15 11:02:20.285756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:24336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.629 [2024-11-15 11:02:20.285786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:14.629 [2024-11-15 11:02:20.285834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:24344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.629 [2024-11-15 11:02:20.285862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:14.629 [2024-11-15 11:02:20.285896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:24352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.629 [2024-11-15 11:02:20.285909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:14.629 [2024-11-15 11:02:20.285927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:24360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.629 [2024-11-15 11:02:20.285939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:14.629 [2024-11-15 11:02:20.285957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
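The throughput progress markers interleaved just above (8936.78 IOPS, 34.91 MiB/s and so on) are consistent with the transfer size shown on every READ/WRITE entry in this stream: len:8, i.e. eight 512-byte blocks, or 4 KiB per command, so each MiB/s figure should be roughly IOPS * 4096 / 2^20. A minimal cross-check of that arithmetic in Python; the helper name is illustrative and only the sampled (IOPS, MiB/s) pairs are taken from the markers above:

    # Cross-check the "IOPS, MiB/s" progress markers against a 4 KiB I/O size
    # (len:8 blocks * 512 bytes, as printed for each READ/WRITE command here).
    def mib_per_s(iops, io_bytes=8 * 512):
        return iops * io_bytes / (1024 * 1024)

    for iops, reported in [(8936.78, 34.91), (8984.30, 35.09), (9070.50, 35.43)]:
        print(f"{iops:8.2f} IOPS -> {mib_per_s(iops):5.2f} MiB/s (log shows {reported:.2f})")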
00:19:14.629 [2024-11-15 11:02:20.285980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:14.629 [2024-11-15 11:02:20.286000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:24376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.629 [2024-11-15 11:02:20.286014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:14.629 [2024-11-15 11:02:20.286034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:24384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.629 [2024-11-15 11:02:20.286047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:14.629 [2024-11-15 11:02:20.286066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:24392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.629 [2024-11-15 11:02:20.286079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:14.629 [2024-11-15 11:02:20.286097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:24400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.629 [2024-11-15 11:02:20.286110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:14.629 [2024-11-15 11:02:20.286128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:24408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.629 [2024-11-15 11:02:20.286141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:14.629 [2024-11-15 11:02:20.286160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:24416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.629 [2024-11-15 11:02:20.286173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:14.629 [2024-11-15 11:02:20.286191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:24424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.629 [2024-11-15 11:02:20.286204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:14.629 [2024-11-15 11:02:20.286222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:24432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.629 [2024-11-15 11:02:20.286236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:14.629 [2024-11-15 11:02:20.286254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:24440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.629 [2024-11-15 11:02:20.286267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:14.629 [2024-11-15 11:02:20.286285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 
nsid:1 lba:24448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.629 [2024-11-15 11:02:20.286298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:14.629 [2024-11-15 11:02:20.286316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:24456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.629 [2024-11-15 11:02:20.286329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:14.629 [2024-11-15 11:02:20.286347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:24464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.629 [2024-11-15 11:02:20.286367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:14.629 [2024-11-15 11:02:20.286386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.629 [2024-11-15 11:02:20.286400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:14.629 [2024-11-15 11:02:20.286434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:24960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.630 [2024-11-15 11:02:20.286464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:14.630 [2024-11-15 11:02:20.286485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:24968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.630 [2024-11-15 11:02:20.286499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:14.630 [2024-11-15 11:02:20.286525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:24976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.630 [2024-11-15 11:02:20.286540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:14.630 [2024-11-15 11:02:20.286578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:24984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.630 [2024-11-15 11:02:20.286598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:14.630 [2024-11-15 11:02:20.286620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:24992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.630 [2024-11-15 11:02:20.286648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:14.630 [2024-11-15 11:02:20.286670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:25000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.630 [2024-11-15 11:02:20.286685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:14.630 [2024-11-15 11:02:20.286706] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:25008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.630 [2024-11-15 11:02:20.286721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:14.630 [2024-11-15 11:02:20.286741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:25016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.630 [2024-11-15 11:02:20.286755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:14.630 [2024-11-15 11:02:20.286776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:25024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.630 [2024-11-15 11:02:20.286790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:14.630 [2024-11-15 11:02:20.286825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:24472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.630 [2024-11-15 11:02:20.286869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:14.630 [2024-11-15 11:02:20.286888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:24480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.630 [2024-11-15 11:02:20.286901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:14.630 [2024-11-15 11:02:20.286930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.630 [2024-11-15 11:02:20.286962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:14.630 [2024-11-15 11:02:20.286981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:24496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.630 [2024-11-15 11:02:20.286995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:14.630 [2024-11-15 11:02:20.287014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:24504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.630 [2024-11-15 11:02:20.287027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:14.630 [2024-11-15 11:02:20.287045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:24512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.630 [2024-11-15 11:02:20.287059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:14.630 [2024-11-15 11:02:20.287077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:24520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.630 [2024-11-15 11:02:20.287091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007e p:0 m:0 dnr:0 
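The "(03/02)" pair that spdk_nvme_print_completion appends to each of these completions is the NVMe status code type / status code: type 0x3 is path-related status, and code 0x02 under that type is Asymmetric Access Inaccessible, matching the ASYMMETRIC ACCESS INACCESSIBLE text printed alongside, presumably because the test keeps I/O running while a controller path is in the ANA inaccessible state. A small illustrative parser for that field; the regex, the classify helper, and the STATUS_NAMES table are my own sketch, not part of SPDK or of this test:

    import re

    # Name the (status code type, status code) pairs seen in this log.
    # 0x3/0x02 is the NVMe path-related "Asymmetric Access Inaccessible" status.
    STATUS_NAMES = {(0x3, 0x02): "ASYMMETRIC ACCESS INACCESSIBLE"}

    def classify(line):
        m = re.search(r"\((\w{2})/(\w{2})\) qid:", line)
        if not m:
            return None
        sct, sc = int(m.group(1), 16), int(m.group(2), 16)
        return STATUS_NAMES.get((sct, sc), f"sct=0x{sct:x} sc=0x{sc:x}")

    sample = "*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0023 p:0 m:0 dnr:0"
    print(classify(sample))  # -> ASYMMETRIC ACCESS INACCESSIBLE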
00:19:14.630 [2024-11-15 11:02:20.287109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:24528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.630 [2024-11-15 11:02:20.287122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:14.630 [2024-11-15 11:02:20.287141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.630 [2024-11-15 11:02:20.287154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.630 [2024-11-15 11:02:20.287173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.630 [2024-11-15 11:02:20.287186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:14.630 [2024-11-15 11:02:20.287205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:24536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.630 [2024-11-15 11:02:20.287218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:14.630 [2024-11-15 11:02:20.287238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:24544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.630 [2024-11-15 11:02:20.287251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:14.630 [2024-11-15 11:02:20.287270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:24552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.630 [2024-11-15 11:02:20.287284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:14.630 [2024-11-15 11:02:20.287302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:24560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.630 [2024-11-15 11:02:20.287316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:14.630 [2024-11-15 11:02:20.287341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:24568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.630 [2024-11-15 11:02:20.287356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:14.630 [2024-11-15 11:02:20.287374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:24576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.630 [2024-11-15 11:02:20.287388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:14.630 [2024-11-15 11:02:20.287406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:24584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.630 [2024-11-15 11:02:20.287436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:14.631 [2024-11-15 11:02:20.287472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:24592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.631 [2024-11-15 11:02:20.287487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:14.631 [2024-11-15 11:02:20.287522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:25048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.631 [2024-11-15 11:02:20.287541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:14.631 [2024-11-15 11:02:20.287579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:25056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.631 [2024-11-15 11:02:20.287595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:14.631 [2024-11-15 11:02:20.287619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:25064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.631 [2024-11-15 11:02:20.287647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:14.631 [2024-11-15 11:02:20.287672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:25072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.631 [2024-11-15 11:02:20.287688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:14.631 [2024-11-15 11:02:20.287710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:25080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.631 [2024-11-15 11:02:20.287725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:14.631 [2024-11-15 11:02:20.287746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.631 [2024-11-15 11:02:20.287762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:14.631 [2024-11-15 11:02:20.287783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:25096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.631 [2024-11-15 11:02:20.287799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:14.631 [2024-11-15 11:02:20.287832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.631 [2024-11-15 11:02:20.287850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:14.631 [2024-11-15 11:02:20.287872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.631 [2024-11-15 11:02:20.287910] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:14.631 [2024-11-15 11:02:20.287936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:24608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.631 [2024-11-15 11:02:20.287953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:14.631 [2024-11-15 11:02:20.287974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:24616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.631 [2024-11-15 11:02:20.287990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:14.631 [2024-11-15 11:02:20.288012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:24624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.631 [2024-11-15 11:02:20.288027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:14.631 [2024-11-15 11:02:20.288049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:24632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.631 [2024-11-15 11:02:20.288064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:14.631 [2024-11-15 11:02:20.288086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:24640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.631 [2024-11-15 11:02:20.288116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:14.631 [2024-11-15 11:02:20.288136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:24648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.631 [2024-11-15 11:02:20.288151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:14.631 [2024-11-15 11:02:20.288186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:24656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.631 [2024-11-15 11:02:20.288200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:14.631 [2024-11-15 11:02:20.288249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:25112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.631 [2024-11-15 11:02:20.288263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:14.631 [2024-11-15 11:02:20.288283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.631 [2024-11-15 11:02:20.288296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:14.631 [2024-11-15 11:02:20.288315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:25128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:14.631 [2024-11-15 11:02:20.288328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:14.631 [2024-11-15 11:02:20.288347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:25136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.631 [2024-11-15 11:02:20.288360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:14.631 [2024-11-15 11:02:20.288379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:25144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.631 [2024-11-15 11:02:20.288399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:14.631 [2024-11-15 11:02:20.288419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:25152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.631 [2024-11-15 11:02:20.288461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:14.631 [2024-11-15 11:02:20.288482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:25160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.631 [2024-11-15 11:02:20.288497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:14.631 [2024-11-15 11:02:20.288519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:25168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.631 [2024-11-15 11:02:20.288533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:14.631 [2024-11-15 11:02:20.288574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:25176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.631 [2024-11-15 11:02:20.288593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:14.631 [2024-11-15 11:02:20.288629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:25184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.631 [2024-11-15 11:02:20.288648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:14.631 [2024-11-15 11:02:20.288669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:25192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.631 [2024-11-15 11:02:20.288684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:14.631 [2024-11-15 11:02:20.288705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.631 [2024-11-15 11:02:20.288719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:14.631 [2024-11-15 11:02:20.288741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 
lba:25208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.631 [2024-11-15 11:02:20.288770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:14.631 [2024-11-15 11:02:20.288834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:25216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.631 [2024-11-15 11:02:20.288847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:14.631 [2024-11-15 11:02:20.288866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:25224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.631 [2024-11-15 11:02:20.288880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:14.631 [2024-11-15 11:02:20.288899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:25232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.631 [2024-11-15 11:02:20.288913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:14.631 [2024-11-15 11:02:20.288931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:24664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.632 [2024-11-15 11:02:20.288945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:14.632 [2024-11-15 11:02:20.288973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:24672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.632 [2024-11-15 11:02:20.288987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:14.632 [2024-11-15 11:02:20.289006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:24680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.632 [2024-11-15 11:02:20.289020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:14.632 [2024-11-15 11:02:20.289038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:24688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.632 [2024-11-15 11:02:20.289052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:14.632 [2024-11-15 11:02:20.289070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:24696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.632 [2024-11-15 11:02:20.289083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:14.632 [2024-11-15 11:02:20.289102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:24704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.632 [2024-11-15 11:02:20.289115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:14.632 [2024-11-15 11:02:20.289134] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:24712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.632 [2024-11-15 11:02:20.289148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:14.632 [2024-11-15 11:02:20.289166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.632 [2024-11-15 11:02:20.289180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:14.632 [2024-11-15 11:02:20.289198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.632 [2024-11-15 11:02:20.289212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:14.632 [2024-11-15 11:02:20.289231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:24736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.632 [2024-11-15 11:02:20.289245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:14.632 [2024-11-15 11:02:20.289263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:24744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.632 [2024-11-15 11:02:20.289277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:14.632 [2024-11-15 11:02:20.289295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:24752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.632 [2024-11-15 11:02:20.289309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:14.632 [2024-11-15 11:02:20.289327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:24760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.632 [2024-11-15 11:02:20.289341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:14.632 [2024-11-15 11:02:20.289362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:24768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.632 [2024-11-15 11:02:20.289379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:14.632 [2024-11-15 11:02:20.289398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.632 [2024-11-15 11:02:20.289412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:14.632 [2024-11-15 11:02:20.289463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:24784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.632 [2024-11-15 11:02:20.289479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0039 p:0 m:0 
dnr:0 00:19:14.632 [2024-11-15 11:02:20.289501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:25240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.632 [2024-11-15 11:02:20.289516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:14.632 [2024-11-15 11:02:20.289537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:25248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.632 [2024-11-15 11:02:20.289552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:14.632 [2024-11-15 11:02:20.289574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:25256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.632 [2024-11-15 11:02:20.289589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:14.632 [2024-11-15 11:02:20.289621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.632 [2024-11-15 11:02:20.289638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:14.632 [2024-11-15 11:02:20.289661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:25272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.632 [2024-11-15 11:02:20.289676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:14.632 [2024-11-15 11:02:20.289698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:25280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.632 [2024-11-15 11:02:20.289713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:14.632 [2024-11-15 11:02:20.289734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:25288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.632 [2024-11-15 11:02:20.289749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:14.632 [2024-11-15 11:02:20.289771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.632 [2024-11-15 11:02:20.289816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:14.632 [2024-11-15 11:02:20.289867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:24792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.632 [2024-11-15 11:02:20.289895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:14.632 [2024-11-15 11:02:20.289914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:24800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.632 [2024-11-15 11:02:20.289934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:14.632 [2024-11-15 11:02:20.289955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:24808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.632 [2024-11-15 11:02:20.289968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:14.632 [2024-11-15 11:02:20.289987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:24816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.632 [2024-11-15 11:02:20.290001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:14.632 [2024-11-15 11:02:20.290019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:24824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.632 [2024-11-15 11:02:20.290032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:14.632 [2024-11-15 11:02:20.290051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:24832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.632 [2024-11-15 11:02:20.290064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:14.632 [2024-11-15 11:02:20.290083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:24840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.632 [2024-11-15 11:02:20.290096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:14.632 [2024-11-15 11:02:20.290115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:24848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.632 [2024-11-15 11:02:20.290128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:14.633 [2024-11-15 11:02:20.290147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:24856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.633 [2024-11-15 11:02:20.290160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:14.633 [2024-11-15 11:02:20.290178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:24864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.633 [2024-11-15 11:02:20.290192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:14.633 [2024-11-15 11:02:20.290210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:24872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.633 [2024-11-15 11:02:20.290223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:14.633 [2024-11-15 11:02:20.290242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:24880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.633 [2024-11-15 11:02:20.290255] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:14.633 [2024-11-15 11:02:20.290274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:24888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.633 [2024-11-15 11:02:20.290287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:14.633 [2024-11-15 11:02:20.290306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:24896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.633 [2024-11-15 11:02:20.290325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:14.633 [2024-11-15 11:02:20.290345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:24904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.633 [2024-11-15 11:02:20.290359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:14.633 [2024-11-15 11:02:20.290866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:24912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.633 [2024-11-15 11:02:20.290892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:14.633 8940.47 IOPS, 34.92 MiB/s [2024-11-15T11:03:01.494Z] 8508.25 IOPS, 33.24 MiB/s [2024-11-15T11:03:01.494Z] 8520.71 IOPS, 33.28 MiB/s [2024-11-15T11:03:01.494Z] 8543.78 IOPS, 33.37 MiB/s [2024-11-15T11:03:01.494Z] 8571.79 IOPS, 33.48 MiB/s [2024-11-15T11:03:01.494Z] 8583.80 IOPS, 33.53 MiB/s [2024-11-15T11:03:01.494Z] 8604.76 IOPS, 33.61 MiB/s [2024-11-15T11:03:01.494Z] [2024-11-15 11:02:27.343794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:60960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.633 [2024-11-15 11:02:27.343875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:14.633 [2024-11-15 11:02:27.343946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:60968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.633 [2024-11-15 11:02:27.343967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:14.633 [2024-11-15 11:02:27.343990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:60976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.633 [2024-11-15 11:02:27.344005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:14.633 [2024-11-15 11:02:27.344037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:60984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.633 [2024-11-15 11:02:27.344051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:14.633 [2024-11-15 11:02:27.344072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:60992 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:19:14.633 [2024-11-15 11:02:27.344087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:14.633 [2024-11-15 11:02:27.344107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:61000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.633 [2024-11-15 11:02:27.344121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:14.633 [2024-11-15 11:02:27.344171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:61008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.633 [2024-11-15 11:02:27.344199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:14.633 [2024-11-15 11:02:27.344218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:61016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.633 [2024-11-15 11:02:27.344231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:14.633 [2024-11-15 11:02:27.344269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:61024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.633 [2024-11-15 11:02:27.344284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:14.633 [2024-11-15 11:02:27.344324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:61032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.633 [2024-11-15 11:02:27.344339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:14.633 [2024-11-15 11:02:27.344358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:61040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.633 [2024-11-15 11:02:27.344371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:14.633 [2024-11-15 11:02:27.344391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:61048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.633 [2024-11-15 11:02:27.344404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:14.633 [2024-11-15 11:02:27.344422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:61056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.633 [2024-11-15 11:02:27.344451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:14.633 [2024-11-15 11:02:27.344470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:61064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.633 [2024-11-15 11:02:27.344483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:14.633 [2024-11-15 11:02:27.344501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 
nsid:1 lba:61072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.633 [2024-11-15 11:02:27.344514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:14.633 [2024-11-15 11:02:27.344533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:61080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.633 [2024-11-15 11:02:27.344562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:14.633 [2024-11-15 11:02:27.344582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:60512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.633 [2024-11-15 11:02:27.344595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:14.633 [2024-11-15 11:02:27.344616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:60520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.633 [2024-11-15 11:02:27.344643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:14.633 [2024-11-15 11:02:27.344666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:60528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.633 [2024-11-15 11:02:27.344680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:14.633 [2024-11-15 11:02:27.344699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:60536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.633 [2024-11-15 11:02:27.344713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:14.633 [2024-11-15 11:02:27.344732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:60544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.633 [2024-11-15 11:02:27.344745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:14.633 [2024-11-15 11:02:27.344765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:60552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.633 [2024-11-15 11:02:27.344788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:14.633 [2024-11-15 11:02:27.344809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:60560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.633 [2024-11-15 11:02:27.344824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:14.633 [2024-11-15 11:02:27.344857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:60568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.633 [2024-11-15 11:02:27.344871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:14.633 [2024-11-15 11:02:27.344890] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:60576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.633 [2024-11-15 11:02:27.344920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:14.634 [2024-11-15 11:02:27.344938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:60584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.634 [2024-11-15 11:02:27.344952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:14.634 [2024-11-15 11:02:27.344970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:60592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.634 [2024-11-15 11:02:27.344984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:14.634 [2024-11-15 11:02:27.345002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:60600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.634 [2024-11-15 11:02:27.345015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:14.634 [2024-11-15 11:02:27.345034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:60608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.634 [2024-11-15 11:02:27.345046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:14.634 [2024-11-15 11:02:27.345065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:60616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.634 [2024-11-15 11:02:27.345077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:14.634 [2024-11-15 11:02:27.345096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:60624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.634 [2024-11-15 11:02:27.345109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:14.634 [2024-11-15 11:02:27.345128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:60632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.634 [2024-11-15 11:02:27.345141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:14.634 [2024-11-15 11:02:27.345173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:61088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.634 [2024-11-15 11:02:27.345191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:14.634 [2024-11-15 11:02:27.345211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:61096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.634 [2024-11-15 11:02:27.345234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0023 p:0 m:0 
dnr:0 00:19:14.634 [2024-11-15 11:02:27.345254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:61104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.634 [2024-11-15 11:02:27.345268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:14.634 [2024-11-15 11:02:27.345286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:61112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.634 [2024-11-15 11:02:27.345299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:14.634 [2024-11-15 11:02:27.345318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:61120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.634 [2024-11-15 11:02:27.345331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:14.634 [2024-11-15 11:02:27.345349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:61128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.634 [2024-11-15 11:02:27.345362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:14.634 [2024-11-15 11:02:27.345381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:61136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.634 [2024-11-15 11:02:27.345394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:14.634 [2024-11-15 11:02:27.345413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:61144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.634 [2024-11-15 11:02:27.345426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:14.634 [2024-11-15 11:02:27.345444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:60640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.634 [2024-11-15 11:02:27.345457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:14.634 [2024-11-15 11:02:27.345475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:60648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.634 [2024-11-15 11:02:27.345488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:14.634 [2024-11-15 11:02:27.345507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:60656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.634 [2024-11-15 11:02:27.345520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:14.634 [2024-11-15 11:02:27.345538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:60664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.634 [2024-11-15 11:02:27.345551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:14.634 [2024-11-15 11:02:27.345582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:60672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.634 [2024-11-15 11:02:27.345596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:14.634 [2024-11-15 11:02:27.345615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:60680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.634 [2024-11-15 11:02:27.345628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:14.634 [2024-11-15 11:02:27.345654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:60688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.634 [2024-11-15 11:02:27.345669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:14.634 [2024-11-15 11:02:27.345687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:60696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.634 [2024-11-15 11:02:27.345700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:14.634 [2024-11-15 11:02:27.345718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:60704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.634 [2024-11-15 11:02:27.345731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:14.634 [2024-11-15 11:02:27.345750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:60712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.634 [2024-11-15 11:02:27.345763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:14.634 [2024-11-15 11:02:27.345782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:60720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.634 [2024-11-15 11:02:27.345795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:14.634 [2024-11-15 11:02:27.345813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:60728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.634 [2024-11-15 11:02:27.345826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:14.634 [2024-11-15 11:02:27.345844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:60736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.634 [2024-11-15 11:02:27.345857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:14.634 [2024-11-15 11:02:27.345875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:60744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.634 [2024-11-15 11:02:27.345888] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:14.634 [2024-11-15 11:02:27.345906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:60752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.634 [2024-11-15 11:02:27.345919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:14.634 [2024-11-15 11:02:27.345937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:60760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.634 [2024-11-15 11:02:27.345950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:14.634 [2024-11-15 11:02:27.345968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:60768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.634 [2024-11-15 11:02:27.345981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:14.634 [2024-11-15 11:02:27.346000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:60776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.634 [2024-11-15 11:02:27.346012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:14.634 [2024-11-15 11:02:27.346040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:60784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.635 [2024-11-15 11:02:27.346054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:14.635 [2024-11-15 11:02:27.346072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:60792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.635 [2024-11-15 11:02:27.346085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:14.635 [2024-11-15 11:02:27.346104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:60800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.635 [2024-11-15 11:02:27.346117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:14.635 [2024-11-15 11:02:27.346135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:60808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.635 [2024-11-15 11:02:27.346148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:14.635 [2024-11-15 11:02:27.346166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:60816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.635 [2024-11-15 11:02:27.346179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:14.635 [2024-11-15 11:02:27.346198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:60824 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:19:14.635 [2024-11-15 11:02:27.346211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:14.635 [2024-11-15 11:02:27.346232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:61152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.635 [2024-11-15 11:02:27.346246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:14.635 [2024-11-15 11:02:27.346265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:61160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.635 [2024-11-15 11:02:27.346279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:14.635 [2024-11-15 11:02:27.346297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:61168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.635 [2024-11-15 11:02:27.346310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:14.635 [2024-11-15 11:02:27.346329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:61176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.635 [2024-11-15 11:02:27.346359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:14.635 [2024-11-15 11:02:27.346378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:61184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.635 [2024-11-15 11:02:27.346392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:14.635 [2024-11-15 11:02:27.346410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:61192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.635 [2024-11-15 11:02:27.346424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:14.635 [2024-11-15 11:02:27.346450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:61200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.635 [2024-11-15 11:02:27.346465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:14.635 [2024-11-15 11:02:27.346484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:61208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.635 [2024-11-15 11:02:27.346497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:14.635 [2024-11-15 11:02:27.346516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:61216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.635 [2024-11-15 11:02:27.346530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:14.635 [2024-11-15 11:02:27.346560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:69 nsid:1 lba:61224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.635 [2024-11-15 11:02:27.346592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:14.635 [2024-11-15 11:02:27.346611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:61232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.635 [2024-11-15 11:02:27.346625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:14.635 [2024-11-15 11:02:27.346645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:61240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.635 [2024-11-15 11:02:27.346659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:14.635 [2024-11-15 11:02:27.346678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:61248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.635 [2024-11-15 11:02:27.346692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:14.635 [2024-11-15 11:02:27.346711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:61256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.635 [2024-11-15 11:02:27.346725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:14.635 [2024-11-15 11:02:27.346744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:61264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.635 [2024-11-15 11:02:27.346758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:14.635 [2024-11-15 11:02:27.346777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:61272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.635 [2024-11-15 11:02:27.346790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:14.635 [2024-11-15 11:02:27.346810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:61280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.635 [2024-11-15 11:02:27.346824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:14.635 [2024-11-15 11:02:27.346844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:61288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.635 [2024-11-15 11:02:27.346858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:14.635 [2024-11-15 11:02:27.346877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:61296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.635 [2024-11-15 11:02:27.346898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:14.635 [2024-11-15 11:02:27.346934] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:61304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.635 [2024-11-15 11:02:27.346948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:14.635 [2024-11-15 11:02:27.346967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:61312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.635 [2024-11-15 11:02:27.346981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:14.635 [2024-11-15 11:02:27.347000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:61320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.635 [2024-11-15 11:02:27.347013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:14.635 [2024-11-15 11:02:27.347032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:61328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.635 [2024-11-15 11:02:27.347045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:14.635 [2024-11-15 11:02:27.347064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:61336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.635 [2024-11-15 11:02:27.347078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:14.636 [2024-11-15 11:02:27.347097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:60832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.636 [2024-11-15 11:02:27.347110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:14.636 [2024-11-15 11:02:27.347130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:60840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.636 [2024-11-15 11:02:27.347143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:14.636 [2024-11-15 11:02:27.347162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:60848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.636 [2024-11-15 11:02:27.347175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:14.636 [2024-11-15 11:02:27.347194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:60856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.636 [2024-11-15 11:02:27.347208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:14.636 [2024-11-15 11:02:27.347226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:60864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.636 [2024-11-15 11:02:27.347240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005e p:0 m:0 dnr:0 
00:19:14.636 [2024-11-15 11:02:27.347258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:60872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.636 [2024-11-15 11:02:27.347272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:14.636 [2024-11-15 11:02:27.347290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:60880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.636 [2024-11-15 11:02:27.347310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:14.636 [2024-11-15 11:02:27.347330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:60888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.636 [2024-11-15 11:02:27.347344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:14.636 [2024-11-15 11:02:27.347366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:61344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.636 [2024-11-15 11:02:27.347380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:14.636 [2024-11-15 11:02:27.347400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:61352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.636 [2024-11-15 11:02:27.347414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:14.636 [2024-11-15 11:02:27.347433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:61360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.636 [2024-11-15 11:02:27.347446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:14.636 [2024-11-15 11:02:27.347465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:61368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.636 [2024-11-15 11:02:27.347478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:14.636 [2024-11-15 11:02:27.347497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:61376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.636 [2024-11-15 11:02:27.347511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:14.636 [2024-11-15 11:02:27.347529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:61384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.636 [2024-11-15 11:02:27.347543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:14.636 [2024-11-15 11:02:27.347573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:61392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.636 [2024-11-15 11:02:27.347590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:14.636 [2024-11-15 11:02:27.347609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:61400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.636 [2024-11-15 11:02:27.347622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:14.636 [2024-11-15 11:02:27.347641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:61408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.636 [2024-11-15 11:02:27.347654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:14.636 [2024-11-15 11:02:27.347673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:61416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.636 [2024-11-15 11:02:27.347686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:14.636 [2024-11-15 11:02:27.347705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:61424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.636 [2024-11-15 11:02:27.347718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:14.636 [2024-11-15 11:02:27.347745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:61432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.636 [2024-11-15 11:02:27.347759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:14.636 [2024-11-15 11:02:27.347778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:61440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.636 [2024-11-15 11:02:27.347792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:14.636 [2024-11-15 11:02:27.347810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:61448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.636 [2024-11-15 11:02:27.347870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:14.636 [2024-11-15 11:02:27.347892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:61456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.636 [2024-11-15 11:02:27.347906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:14.636 [2024-11-15 11:02:27.347927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:61464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.636 [2024-11-15 11:02:27.347941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:14.636 [2024-11-15 11:02:27.347962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:60896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.636 [2024-11-15 11:02:27.347976] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:14.636 [2024-11-15 11:02:27.347998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:60904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.636 [2024-11-15 11:02:27.348013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:14.636 [2024-11-15 11:02:27.348033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:60912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.636 [2024-11-15 11:02:27.348048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:14.636 [2024-11-15 11:02:27.348068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:60920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.636 [2024-11-15 11:02:27.348083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:14.636 [2024-11-15 11:02:27.348103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:60928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.636 [2024-11-15 11:02:27.348118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:14.636 [2024-11-15 11:02:27.348153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:60936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.636 [2024-11-15 11:02:27.348190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:14.636 [2024-11-15 11:02:27.348210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:60944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.636 [2024-11-15 11:02:27.348224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:14.636 [2024-11-15 11:02:27.348959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:60952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.636 [2024-11-15 11:02:27.348985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:14.636 [2024-11-15 11:02:27.349015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:61472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.636 [2024-11-15 11:02:27.349030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:14.636 [2024-11-15 11:02:27.349055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:61480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.636 [2024-11-15 11:02:27.349069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:14.637 [2024-11-15 11:02:27.349094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:61488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:14.637 [2024-11-15 11:02:27.349108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:14.637 [2024-11-15 11:02:27.349133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:61496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.637 [2024-11-15 11:02:27.349146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:14.637 [2024-11-15 11:02:27.349171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:61504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.637 [2024-11-15 11:02:27.349185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:14.637 [2024-11-15 11:02:27.349210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:61512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.637 [2024-11-15 11:02:27.349223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:14.637 [2024-11-15 11:02:27.349248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:61520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.637 [2024-11-15 11:02:27.349262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.637 [2024-11-15 11:02:27.349308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:61528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.637 [2024-11-15 11:02:27.349328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:14.637 8535.82 IOPS, 33.34 MiB/s [2024-11-15T11:03:01.498Z] 8164.70 IOPS, 31.89 MiB/s [2024-11-15T11:03:01.498Z] 7824.50 IOPS, 30.56 MiB/s [2024-11-15T11:03:01.498Z] 7511.52 IOPS, 29.34 MiB/s [2024-11-15T11:03:01.498Z] 7222.62 IOPS, 28.21 MiB/s [2024-11-15T11:03:01.498Z] 6955.11 IOPS, 27.17 MiB/s [2024-11-15T11:03:01.498Z] 6706.71 IOPS, 26.20 MiB/s [2024-11-15T11:03:01.498Z] 6524.93 IOPS, 25.49 MiB/s [2024-11-15T11:03:01.498Z] 6595.17 IOPS, 25.76 MiB/s [2024-11-15T11:03:01.498Z] 6672.23 IOPS, 26.06 MiB/s [2024-11-15T11:03:01.498Z] 6748.47 IOPS, 26.36 MiB/s [2024-11-15T11:03:01.498Z] 6814.76 IOPS, 26.62 MiB/s [2024-11-15T11:03:01.498Z] 6874.32 IOPS, 26.85 MiB/s [2024-11-15T11:03:01.498Z] 6930.26 IOPS, 27.07 MiB/s [2024-11-15T11:03:01.498Z] [2024-11-15 11:02:40.819571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:127032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.637 [2024-11-15 11:02:40.819663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:14.637 [2024-11-15 11:02:40.819731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:127040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.637 [2024-11-15 11:02:40.819751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:14.637 [2024-11-15 11:02:40.819832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:91 nsid:1 lba:127048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.637 [2024-11-15 11:02:40.819859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:14.637 [2024-11-15 11:02:40.819879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:127056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.637 [2024-11-15 11:02:40.819893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:14.637 [2024-11-15 11:02:40.819912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:127064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.637 [2024-11-15 11:02:40.819926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:14.637 [2024-11-15 11:02:40.819945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:127072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.637 [2024-11-15 11:02:40.819959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:14.637 [2024-11-15 11:02:40.819978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:127080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.637 [2024-11-15 11:02:40.819992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:14.637 [2024-11-15 11:02:40.820011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:127088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.637 [2024-11-15 11:02:40.820029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:14.637 [2024-11-15 11:02:40.820063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:126648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.637 [2024-11-15 11:02:40.820076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:14.637 [2024-11-15 11:02:40.820095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:126656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.637 [2024-11-15 11:02:40.820108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:14.637 [2024-11-15 11:02:40.820127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:126664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.637 [2024-11-15 11:02:40.820140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:14.637 [2024-11-15 11:02:40.820163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:126672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.637 [2024-11-15 11:02:40.820176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:14.637 [2024-11-15 11:02:40.820195] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:126680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.637 [2024-11-15 11:02:40.820208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:14.637 [2024-11-15 11:02:40.820226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:126688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.637 [2024-11-15 11:02:40.820239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:14.637 [2024-11-15 11:02:40.820269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:126696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.637 [2024-11-15 11:02:40.820285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:14.637 [2024-11-15 11:02:40.820304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:126704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.637 [2024-11-15 11:02:40.820318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:14.637 [2024-11-15 11:02:40.820366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:126616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.637 [2024-11-15 11:02:40.820388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.637 [2024-11-15 11:02:40.820404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:126624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.637 [2024-11-15 11:02:40.820417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.637 [2024-11-15 11:02:40.820431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:126632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.637 [2024-11-15 11:02:40.820443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.637 [2024-11-15 11:02:40.820457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:126640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.637 [2024-11-15 11:02:40.820469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.637 [2024-11-15 11:02:40.820488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:127096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.637 [2024-11-15 11:02:40.820500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.637 [2024-11-15 11:02:40.820513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:127104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.637 [2024-11-15 11:02:40.820537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.637 [2024-11-15 
11:02:40.820555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:127112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.637 [2024-11-15 11:02:40.820568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.637 [2024-11-15 11:02:40.820582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:127120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.638 [2024-11-15 11:02:40.820595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.638 [2024-11-15 11:02:40.820608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:127128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.638 [2024-11-15 11:02:40.820622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.638 [2024-11-15 11:02:40.820635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:127136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.638 [2024-11-15 11:02:40.820648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.638 [2024-11-15 11:02:40.820662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:127144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.638 [2024-11-15 11:02:40.820684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.638 [2024-11-15 11:02:40.820699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:127152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.638 [2024-11-15 11:02:40.820712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.638 [2024-11-15 11:02:40.820738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:127160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.638 [2024-11-15 11:02:40.820750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.638 [2024-11-15 11:02:40.820763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:127168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.638 [2024-11-15 11:02:40.820776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.638 [2024-11-15 11:02:40.820790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:127176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.638 [2024-11-15 11:02:40.820809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.638 [2024-11-15 11:02:40.820825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:127184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.638 [2024-11-15 11:02:40.820839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.638 [2024-11-15 11:02:40.820852] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:127192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.638 [2024-11-15 11:02:40.820865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.638 [2024-11-15 11:02:40.820878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:127200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.638 [2024-11-15 11:02:40.820891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.638 [2024-11-15 11:02:40.820904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:127208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.638 [2024-11-15 11:02:40.820916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.638 [2024-11-15 11:02:40.820930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:127216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.638 [2024-11-15 11:02:40.820942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.638 [2024-11-15 11:02:40.820956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:127224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.638 [2024-11-15 11:02:40.820969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.638 [2024-11-15 11:02:40.820983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:127232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.638 [2024-11-15 11:02:40.820995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.638 [2024-11-15 11:02:40.821009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:127240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.638 [2024-11-15 11:02:40.821021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.638 [2024-11-15 11:02:40.821035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:127248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.638 [2024-11-15 11:02:40.821055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.638 [2024-11-15 11:02:40.821069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:127256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.638 [2024-11-15 11:02:40.821083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.638 [2024-11-15 11:02:40.821098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:127264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.638 [2024-11-15 11:02:40.821110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.638 [2024-11-15 11:02:40.821124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:110 nsid:1 lba:127272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.638 [2024-11-15 11:02:40.821137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.638 [2024-11-15 11:02:40.821151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:127280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.638 [2024-11-15 11:02:40.821163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.638 [2024-11-15 11:02:40.821177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:126712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.638 [2024-11-15 11:02:40.821189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.638 [2024-11-15 11:02:40.821203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:126720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.638 [2024-11-15 11:02:40.821215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.638 [2024-11-15 11:02:40.821228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:126728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.638 [2024-11-15 11:02:40.821240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.638 [2024-11-15 11:02:40.821254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:126736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.638 [2024-11-15 11:02:40.821267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.638 [2024-11-15 11:02:40.821281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:126744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.638 [2024-11-15 11:02:40.821293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.638 [2024-11-15 11:02:40.821307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:126752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.638 [2024-11-15 11:02:40.821320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.638 [2024-11-15 11:02:40.821334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:126760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.638 [2024-11-15 11:02:40.821347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.638 [2024-11-15 11:02:40.821361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:126768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.638 [2024-11-15 11:02:40.821373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.638 [2024-11-15 11:02:40.821393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 
lba:126776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.638 [2024-11-15 11:02:40.821406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.638 [2024-11-15 11:02:40.821421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:126784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.638 [2024-11-15 11:02:40.821445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.638 [2024-11-15 11:02:40.821459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:126792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.638 [2024-11-15 11:02:40.821471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.638 [2024-11-15 11:02:40.821485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:126800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.639 [2024-11-15 11:02:40.821497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.639 [2024-11-15 11:02:40.821512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:126808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.639 [2024-11-15 11:02:40.821534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.639 [2024-11-15 11:02:40.821551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:126816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.639 [2024-11-15 11:02:40.821564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.639 [2024-11-15 11:02:40.821577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:126824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.639 [2024-11-15 11:02:40.821590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.639 [2024-11-15 11:02:40.821604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:126832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.639 [2024-11-15 11:02:40.821617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.639 [2024-11-15 11:02:40.821630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:126840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.639 [2024-11-15 11:02:40.821643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.639 [2024-11-15 11:02:40.821656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:126848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.639 [2024-11-15 11:02:40.821669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.639 [2024-11-15 11:02:40.821683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:126856 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:19:14.639 [2024-11-15 11:02:40.821695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.639 [2024-11-15 11:02:40.821709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:126864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.639 [2024-11-15 11:02:40.821722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.639 [2024-11-15 11:02:40.821736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:126872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.639 [2024-11-15 11:02:40.821762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.639 [2024-11-15 11:02:40.821777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:126880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.639 [2024-11-15 11:02:40.821790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.639 [2024-11-15 11:02:40.821804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:126888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.639 [2024-11-15 11:02:40.821816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.639 [2024-11-15 11:02:40.821830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:126896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.639 [2024-11-15 11:02:40.821842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.639 [2024-11-15 11:02:40.821856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:127288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.639 [2024-11-15 11:02:40.821868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.639 [2024-11-15 11:02:40.821882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:127296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.639 [2024-11-15 11:02:40.821895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.639 [2024-11-15 11:02:40.821908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:127304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.639 [2024-11-15 11:02:40.821920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.639 [2024-11-15 11:02:40.821934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:127312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.639 [2024-11-15 11:02:40.821946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.639 [2024-11-15 11:02:40.821961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:127320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.639 
[2024-11-15 11:02:40.821974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.639 [2024-11-15 11:02:40.821988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:127328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.639 [2024-11-15 11:02:40.822001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.639 [2024-11-15 11:02:40.822015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:127336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.639 [2024-11-15 11:02:40.822027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.639 [2024-11-15 11:02:40.822041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:127344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.639 [2024-11-15 11:02:40.822053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.639 [2024-11-15 11:02:40.822067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:126904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.639 [2024-11-15 11:02:40.822079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.639 [2024-11-15 11:02:40.822098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:126912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.639 [2024-11-15 11:02:40.822112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.639 [2024-11-15 11:02:40.822125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:126920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.639 [2024-11-15 11:02:40.822138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.639 [2024-11-15 11:02:40.822152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:126928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.639 [2024-11-15 11:02:40.822165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.639 [2024-11-15 11:02:40.822179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:126936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.639 [2024-11-15 11:02:40.822191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.639 [2024-11-15 11:02:40.822205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:126944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.639 [2024-11-15 11:02:40.822217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.639 [2024-11-15 11:02:40.822231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:126952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.639 [2024-11-15 11:02:40.822243] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.639 [2024-11-15 11:02:40.822257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:126960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.639 [2024-11-15 11:02:40.822269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.639 [2024-11-15 11:02:40.822283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:127352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.639 [2024-11-15 11:02:40.822295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.639 [2024-11-15 11:02:40.822309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:127360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.639 [2024-11-15 11:02:40.822322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.639 [2024-11-15 11:02:40.822336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:127368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.639 [2024-11-15 11:02:40.822349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.639 [2024-11-15 11:02:40.822362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:127376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.640 [2024-11-15 11:02:40.822374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.640 [2024-11-15 11:02:40.822389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:127384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.640 [2024-11-15 11:02:40.822402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.640 [2024-11-15 11:02:40.822416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:127392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.640 [2024-11-15 11:02:40.822444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.640 [2024-11-15 11:02:40.822471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:127400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.640 [2024-11-15 11:02:40.822484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.640 [2024-11-15 11:02:40.822497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:127408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.640 [2024-11-15 11:02:40.822510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.640 [2024-11-15 11:02:40.822533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:127416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.640 [2024-11-15 11:02:40.822548] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.640 [2024-11-15 11:02:40.822564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:127424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.640 [2024-11-15 11:02:40.822576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.640 [2024-11-15 11:02:40.822590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:127432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.640 [2024-11-15 11:02:40.822603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.640 [2024-11-15 11:02:40.822626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:127440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.640 [2024-11-15 11:02:40.822640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.640 [2024-11-15 11:02:40.822653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:127448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.640 [2024-11-15 11:02:40.822666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.640 [2024-11-15 11:02:40.822680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:127456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.640 [2024-11-15 11:02:40.822693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.640 [2024-11-15 11:02:40.822706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:127464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.640 [2024-11-15 11:02:40.822719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.640 [2024-11-15 11:02:40.822732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:127472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.640 [2024-11-15 11:02:40.822744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.640 [2024-11-15 11:02:40.822771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:126968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.640 [2024-11-15 11:02:40.822784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.640 [2024-11-15 11:02:40.822797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:126976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.640 [2024-11-15 11:02:40.822810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.640 [2024-11-15 11:02:40.822830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:126984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.640 [2024-11-15 11:02:40.822844] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.640 [2024-11-15 11:02:40.822863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:126992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.640 [2024-11-15 11:02:40.822876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.640 [2024-11-15 11:02:40.822890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:127000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.640 [2024-11-15 11:02:40.822903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.640 [2024-11-15 11:02:40.822916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:127008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.640 [2024-11-15 11:02:40.822929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.640 [2024-11-15 11:02:40.822943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:127016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.640 [2024-11-15 11:02:40.822955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.640 [2024-11-15 11:02:40.822968] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f3290 is same with the state(6) to be set 00:19:14.640 [2024-11-15 11:02:40.822984] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:14.640 [2024-11-15 11:02:40.822994] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:14.640 [2024-11-15 11:02:40.823004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:127024 len:8 PRP1 0x0 PRP2 0x0 00:19:14.640 [2024-11-15 11:02:40.823016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.640 [2024-11-15 11:02:40.823029] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:14.640 [2024-11-15 11:02:40.823038] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:14.640 [2024-11-15 11:02:40.823047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127480 len:8 PRP1 0x0 PRP2 0x0 00:19:14.640 [2024-11-15 11:02:40.823065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.640 [2024-11-15 11:02:40.823078] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:14.640 [2024-11-15 11:02:40.823087] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:14.640 [2024-11-15 11:02:40.823096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127488 len:8 PRP1 0x0 PRP2 0x0 00:19:14.640 [2024-11-15 11:02:40.823108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.640 [2024-11-15 11:02:40.823119] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:14.640 [2024-11-15 
11:02:40.823129] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:14.640 [2024-11-15 11:02:40.823138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127496 len:8 PRP1 0x0 PRP2 0x0 00:19:14.640 [2024-11-15 11:02:40.823150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.640 [2024-11-15 11:02:40.823162] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:14.640 [2024-11-15 11:02:40.823171] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:14.640 [2024-11-15 11:02:40.823190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127504 len:8 PRP1 0x0 PRP2 0x0 00:19:14.640 [2024-11-15 11:02:40.823203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.640 [2024-11-15 11:02:40.823225] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:14.640 [2024-11-15 11:02:40.823235] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:14.640 [2024-11-15 11:02:40.823244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127512 len:8 PRP1 0x0 PRP2 0x0 00:19:14.640 [2024-11-15 11:02:40.823256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.640 [2024-11-15 11:02:40.823269] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:14.640 [2024-11-15 11:02:40.823278] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:14.640 [2024-11-15 11:02:40.823287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127520 len:8 PRP1 0x0 PRP2 0x0 00:19:14.640 [2024-11-15 11:02:40.823298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.640 [2024-11-15 11:02:40.823311] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:14.640 [2024-11-15 11:02:40.823320] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:14.640 [2024-11-15 11:02:40.823329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127528 len:8 PRP1 0x0 PRP2 0x0 00:19:14.640 [2024-11-15 11:02:40.823341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.640 [2024-11-15 11:02:40.823353] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:14.640 [2024-11-15 11:02:40.823362] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:14.640 [2024-11-15 11:02:40.823371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127536 len:8 PRP1 0x0 PRP2 0x0 00:19:14.641 [2024-11-15 11:02:40.823382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.641 [2024-11-15 11:02:40.823394] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:14.641 [2024-11-15 11:02:40.823404] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:14.641 [2024-11-15 11:02:40.823413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127544 len:8 PRP1 0x0 PRP2 0x0 00:19:14.641 [2024-11-15 11:02:40.823444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.641 [2024-11-15 11:02:40.823456] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:14.641 [2024-11-15 11:02:40.823465] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:14.641 [2024-11-15 11:02:40.823475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127552 len:8 PRP1 0x0 PRP2 0x0 00:19:14.641 [2024-11-15 11:02:40.823486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.641 [2024-11-15 11:02:40.823499] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:14.641 [2024-11-15 11:02:40.823507] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:14.641 [2024-11-15 11:02:40.823517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127560 len:8 PRP1 0x0 PRP2 0x0 00:19:14.641 [2024-11-15 11:02:40.823540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.641 [2024-11-15 11:02:40.823554] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:14.641 [2024-11-15 11:02:40.823570] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:14.641 [2024-11-15 11:02:40.823581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127568 len:8 PRP1 0x0 PRP2 0x0 00:19:14.641 [2024-11-15 11:02:40.823593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.641 [2024-11-15 11:02:40.823611] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:14.641 [2024-11-15 11:02:40.823621] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:14.641 [2024-11-15 11:02:40.823630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127576 len:8 PRP1 0x0 PRP2 0x0 00:19:14.641 [2024-11-15 11:02:40.823641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.641 [2024-11-15 11:02:40.823653] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:14.641 [2024-11-15 11:02:40.823662] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:14.641 [2024-11-15 11:02:40.823671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127584 len:8 PRP1 0x0 PRP2 0x0 00:19:14.641 [2024-11-15 11:02:40.823683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.641 [2024-11-15 11:02:40.823695] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:14.641 [2024-11-15 11:02:40.823704] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:19:14.641 [2024-11-15 11:02:40.823714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127592 len:8 PRP1 0x0 PRP2 0x0 00:19:14.641 [2024-11-15 11:02:40.823725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.641 [2024-11-15 11:02:40.823747] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:14.641 [2024-11-15 11:02:40.823756] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:14.641 [2024-11-15 11:02:40.823766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127600 len:8 PRP1 0x0 PRP2 0x0 00:19:14.641 [2024-11-15 11:02:40.823777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.641 [2024-11-15 11:02:40.823789] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:14.641 [2024-11-15 11:02:40.823798] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:14.641 [2024-11-15 11:02:40.823807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127608 len:8 PRP1 0x0 PRP2 0x0 00:19:14.641 [2024-11-15 11:02:40.823848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.641 [2024-11-15 11:02:40.823863] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:14.641 [2024-11-15 11:02:40.823872] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:14.641 [2024-11-15 11:02:40.823881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127616 len:8 PRP1 0x0 PRP2 0x0 00:19:14.641 [2024-11-15 11:02:40.823893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.641 [2024-11-15 11:02:40.823905] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:14.641 [2024-11-15 11:02:40.823914] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:14.641 [2024-11-15 11:02:40.823923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127624 len:8 PRP1 0x0 PRP2 0x0 00:19:14.641 [2024-11-15 11:02:40.823935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.641 [2024-11-15 11:02:40.823954] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:14.641 [2024-11-15 11:02:40.823963] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:14.641 [2024-11-15 11:02:40.823973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127632 len:8 PRP1 0x0 PRP2 0x0 00:19:14.641 [2024-11-15 11:02:40.823985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.641 [2024-11-15 11:02:40.824177] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:14.641 [2024-11-15 11:02:40.824203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.641 [2024-11-15 11:02:40.824218] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:14.641 [2024-11-15 11:02:40.824230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.641 [2024-11-15 11:02:40.824243] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:14.641 [2024-11-15 11:02:40.824255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.641 [2024-11-15 11:02:40.824267] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:14.641 [2024-11-15 11:02:40.824279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.641 [2024-11-15 11:02:40.824293] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.641 [2024-11-15 11:02:40.824305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.641 [2024-11-15 11:02:40.824324] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85fe50 is same with the state(6) to be set 00:19:14.641 [2024-11-15 11:02:40.825342] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:19:14.641 [2024-11-15 11:02:40.825379] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x85fe50 (9): Bad file descriptor 00:19:14.641 [2024-11-15 11:02:40.825799] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:14.641 [2024-11-15 11:02:40.825830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85fe50 with addr=10.0.0.3, port=4421 00:19:14.641 [2024-11-15 11:02:40.825846] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85fe50 is same with the state(6) to be set 00:19:14.641 [2024-11-15 11:02:40.825907] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x85fe50 (9): Bad file descriptor 00:19:14.641 [2024-11-15 11:02:40.825940] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:19:14.641 [2024-11-15 11:02:40.825955] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:19:14.641 [2024-11-15 11:02:40.825969] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:19:14.641 [2024-11-15 11:02:40.825984] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:19:14.641 [2024-11-15 11:02:40.825998] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:19:14.641 6981.33 IOPS, 27.27 MiB/s [2024-11-15T11:03:01.502Z] 7026.38 IOPS, 27.45 MiB/s [2024-11-15T11:03:01.502Z] 7074.95 IOPS, 27.64 MiB/s [2024-11-15T11:03:01.502Z] 7109.95 IOPS, 27.77 MiB/s [2024-11-15T11:03:01.502Z] 7168.00 IOPS, 28.00 MiB/s [2024-11-15T11:03:01.502Z] 7223.41 IOPS, 28.22 MiB/s [2024-11-15T11:03:01.502Z] 7262.67 IOPS, 28.37 MiB/s [2024-11-15T11:03:01.502Z] 7309.02 IOPS, 28.55 MiB/s [2024-11-15T11:03:01.502Z] 7350.55 IOPS, 28.71 MiB/s [2024-11-15T11:03:01.502Z] 7389.69 IOPS, 28.87 MiB/s [2024-11-15T11:03:01.502Z] [2024-11-15 11:02:50.898249] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:19:14.641 7424.52 IOPS, 29.00 MiB/s [2024-11-15T11:03:01.502Z] 7460.13 IOPS, 29.14 MiB/s [2024-11-15T11:03:01.502Z] 7491.17 IOPS, 29.26 MiB/s [2024-11-15T11:03:01.502Z] 7521.06 IOPS, 29.38 MiB/s [2024-11-15T11:03:01.502Z] 7543.22 IOPS, 29.47 MiB/s [2024-11-15T11:03:01.502Z] 7569.96 IOPS, 29.57 MiB/s [2024-11-15T11:03:01.502Z] 7594.94 IOPS, 29.67 MiB/s [2024-11-15T11:03:01.502Z] 7618.85 IOPS, 29.76 MiB/s [2024-11-15T11:03:01.502Z] 7640.61 IOPS, 29.85 MiB/s [2024-11-15T11:03:01.502Z] 7662.27 IOPS, 29.93 MiB/s [2024-11-15T11:03:01.502Z] Received shutdown signal, test time was about 55.464783 seconds 00:19:14.641 00:19:14.641 Latency(us) 00:19:14.641 [2024-11-15T11:03:01.502Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:14.641 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:14.641 Verification LBA range: start 0x0 length 0x4000 00:19:14.641 Nvme0n1 : 55.46 7667.10 29.95 0.00 0.00 16665.28 1802.24 7015926.69 00:19:14.641 [2024-11-15T11:03:01.502Z] =================================================================================================================== 00:19:14.641 [2024-11-15T11:03:01.502Z] Total : 7667.10 29.95 0.00 0.00 16665.28 1802.24 7015926.69 00:19:14.641 11:03:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:14.899 11:03:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:19:14.900 11:03:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:14.900 11:03:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:19:14.900 11:03:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:14.900 11:03:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@121 -- # sync 00:19:14.900 11:03:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:14.900 11:03:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@124 -- # set +e 00:19:14.900 11:03:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:14.900 11:03:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:14.900 rmmod nvme_tcp 00:19:14.900 rmmod nvme_fabrics 00:19:14.900 rmmod nvme_keyring 00:19:14.900 11:03:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:14.900 11:03:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@128 -- # set -e 
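Once the multipath run prints its final throughput and latency summary, the teardown traced above follows a fixed order: the subsystem is removed over the RPC socket, the host-side kernel modules are unloaded, and the target process is stopped in the killprocess call that follows. A minimal sketch of that order, using the rpc.py path, NQN and PID that appear in this log; the "|| true" guards and the bare kill are assumptions for the sketch, not copied from the test scripts:

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # 1. remove the NVMe-oF subsystem so initiators stop seeing the namespace
  "$rpc_py" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

  # 2. unload the host-side initiator modules (ignore "not loaded" errors)
  modprobe -v -r nvme-tcp || true
  modprobe -v -r nvme-fabrics || true

  # 3. stop the target application (80477 is the nvmfpid from this run)
  kill 80477 2>/dev/null || true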
00:19:14.900 11:03:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@129 -- # return 0 00:19:14.900 11:03:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@517 -- # '[' -n 80477 ']' 00:19:14.900 11:03:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@518 -- # killprocess 80477 00:19:14.900 11:03:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 80477 ']' 00:19:14.900 11:03:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 80477 00:19:14.900 11:03:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname 00:19:14.900 11:03:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:14.900 11:03:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80477 00:19:14.900 11:03:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:14.900 11:03:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:14.900 killing process with pid 80477 00:19:14.900 11:03:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80477' 00:19:14.900 11:03:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 80477 00:19:14.900 11:03:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 80477 00:19:15.157 11:03:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:15.157 11:03:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:15.157 11:03:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:15.157 11:03:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@297 -- # iptr 00:19:15.157 11:03:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:15.157 11:03:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-save 00:19:15.157 11:03:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:19:15.157 11:03:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:15.157 11:03:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:15.157 11:03:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:15.157 11:03:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:15.157 11:03:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:15.414 11:03:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:15.414 11:03:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:15.414 11:03:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:15.414 11:03:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:15.415 11:03:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:15.415 11:03:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@241 
-- # ip link delete nvmf_br type bridge 00:19:15.415 11:03:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:15.415 11:03:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:15.415 11:03:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:15.415 11:03:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:15.415 11:03:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:15.415 11:03:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:15.415 11:03:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:15.415 11:03:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:15.415 11:03:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@300 -- # return 0 00:19:15.415 ************************************ 00:19:15.415 END TEST nvmf_host_multipath 00:19:15.415 ************************************ 00:19:15.415 00:19:15.415 real 1m1.654s 00:19:15.415 user 2m51.226s 00:19:15.415 sys 0m17.880s 00:19:15.415 11:03:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:15.415 11:03:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:19:15.673 11:03:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@43 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:19:15.673 11:03:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:15.673 11:03:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:15.673 11:03:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:15.673 ************************************ 00:19:15.673 START TEST nvmf_timeout 00:19:15.673 ************************************ 00:19:15.673 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:19:15.673 * Looking for test storage... 
00:19:15.673 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:15.673 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:15.673 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1693 -- # lcov --version 00:19:15.673 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:15.673 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:15.673 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:15.673 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:15.673 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:15.673 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:19:15.673 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:19:15.673 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:19:15.673 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:19:15.673 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:19:15.673 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:19:15.673 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:19:15.673 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:15.673 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@344 -- # case "$op" in 00:19:15.673 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@345 -- # : 1 00:19:15.673 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:15.673 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:15.673 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # decimal 1 00:19:15.673 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=1 00:19:15.673 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:15.673 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 1 00:19:15.673 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:19:15.673 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # decimal 2 00:19:15.673 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=2 00:19:15.673 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:15.673 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 2 00:19:15.673 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:19:15.673 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:15.673 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:15.673 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # return 0 00:19:15.673 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:15.673 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:15.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:15.673 --rc genhtml_branch_coverage=1 00:19:15.673 --rc genhtml_function_coverage=1 00:19:15.673 --rc genhtml_legend=1 00:19:15.673 --rc geninfo_all_blocks=1 00:19:15.673 --rc geninfo_unexecuted_blocks=1 00:19:15.673 00:19:15.673 ' 00:19:15.673 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:15.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:15.673 --rc genhtml_branch_coverage=1 00:19:15.673 --rc genhtml_function_coverage=1 00:19:15.673 --rc genhtml_legend=1 00:19:15.673 --rc geninfo_all_blocks=1 00:19:15.673 --rc geninfo_unexecuted_blocks=1 00:19:15.673 00:19:15.673 ' 00:19:15.673 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:15.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:15.673 --rc genhtml_branch_coverage=1 00:19:15.673 --rc genhtml_function_coverage=1 00:19:15.673 --rc genhtml_legend=1 00:19:15.673 --rc geninfo_all_blocks=1 00:19:15.673 --rc geninfo_unexecuted_blocks=1 00:19:15.673 00:19:15.673 ' 00:19:15.673 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:15.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:15.673 --rc genhtml_branch_coverage=1 00:19:15.673 --rc genhtml_function_coverage=1 00:19:15.673 --rc genhtml_legend=1 00:19:15.673 --rc geninfo_all_blocks=1 00:19:15.673 --rc geninfo_unexecuted_blocks=1 00:19:15.673 00:19:15.673 ' 00:19:15.673 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:15.673 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:19:15.674 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:15.674 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:15.674 
11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:15.674 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:15.674 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:15.674 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:15.674 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:15.674 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:15.674 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:15.674 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:15.674 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:19:15.674 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:19:15.674 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:15.674 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:15.674 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:15.674 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:15.674 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:15.674 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:19:15.674 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:15.674 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:15.674 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:15.674 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:15.674 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:15.674 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:15.674 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:19:15.674 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:15.674 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@51 -- # : 0 00:19:15.674 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:15.674 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:15.674 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:15.674 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:15.674 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:15.674 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:15.674 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:15.674 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:15.674 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:15.674 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:15.674 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:15.674 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:15.674 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:15.674 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:19:15.674 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:15.674 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:19:15.674 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:15.674 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:15.674 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:15.674 11:03:02 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:15.674 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:15.674 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:15.674 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:15.674 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:15.674 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:15.674 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:15.674 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:15.674 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:15.674 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:15.674 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:15.674 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:15.674 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:15.674 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:15.674 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:15.674 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:15.674 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:15.674 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:15.674 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:15.674 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:15.674 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:15.674 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:15.674 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:15.674 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:15.674 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:15.674 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:15.674 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:15.674 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:15.674 Cannot find device "nvmf_init_br" 00:19:15.674 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:19:15.674 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:15.933 Cannot find device "nvmf_init_br2" 00:19:15.933 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:19:15.933 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 
-- # ip link set nvmf_tgt_br nomaster 00:19:15.933 Cannot find device "nvmf_tgt_br" 00:19:15.933 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 -- # true 00:19:15.933 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:15.933 Cannot find device "nvmf_tgt_br2" 00:19:15.933 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # true 00:19:15.933 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:15.933 Cannot find device "nvmf_init_br" 00:19:15.933 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # true 00:19:15.933 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:15.933 Cannot find device "nvmf_init_br2" 00:19:15.933 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # true 00:19:15.933 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:15.933 Cannot find device "nvmf_tgt_br" 00:19:15.933 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # true 00:19:15.933 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:15.933 Cannot find device "nvmf_tgt_br2" 00:19:15.933 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # true 00:19:15.933 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:15.933 Cannot find device "nvmf_br" 00:19:15.933 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # true 00:19:15.933 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:15.933 Cannot find device "nvmf_init_if" 00:19:15.933 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # true 00:19:15.933 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:15.933 Cannot find device "nvmf_init_if2" 00:19:15.933 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # true 00:19:15.933 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:15.933 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:15.933 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # true 00:19:15.933 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:15.933 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:15.933 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # true 00:19:15.933 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:15.933 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:15.933 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:15.933 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:15.933 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:15.933 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 
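The "Cannot find device ..." and "Cannot open network namespace ..." messages a few lines above are expected: before building the fresh topology for this test, nvmftestinit first tears down any leftovers from a previous run, and each cleanup command is paired with true (the "# true" trace records) so a missing interface does not fail the script. A rough sketch of that idempotent-cleanup pattern, using simplified commands rather than the actual common.sh helpers (interface and namespace names are taken from this log):

  for dev in nvmf_init_if nvmf_init_if2 nvmf_br \
             nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link delete "$dev" 2>/dev/null || true   # veth peers disappear with their pair
  done
  ip netns delete nvmf_tgt_ns_spdk 2>/dev/null || true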
00:19:15.933 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:15.933 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:15.933 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:15.933 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:15.933 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:15.933 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:15.933 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:15.933 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:15.933 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:15.933 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:15.933 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:15.933 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:15.933 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:15.933 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:16.192 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:16.192 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:16.192 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:16.192 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:16.192 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:16.192 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:16.192 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:16.192 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:16.192 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:16.192 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:16.192 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:16.192 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 
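With the interfaces in place, the trace assigns the addresses declared at the top of this section (initiator side 10.0.0.1/10.0.0.2 on the host, target side 10.0.0.3/10.0.0.4 inside the namespace), brings every link up, bridges the four peer ends together, and opens TCP port 4420 on the initiator interfaces. This is a condensed restatement of common.sh lines 190-219 above; the ipts wrapper seen in the trace runs the same iptables commands but tags each rule with an 'SPDK_NVMF:' comment, presumably so the rules can be identified and removed later.

  # addresses: initiator side stays on the host, target side lives in the netns
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

  # bring everything up, then bridge the four peer ends together
  for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" up
  done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" master nvmf_br
  done

  # allow NVMe/TCP traffic (port 4420) in on the initiator interfaces and across the bridge
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The four pings that follow simply confirm host-to-namespace reachability in both directions before the target application is started.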
00:19:16.192 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:16.192 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:16.192 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:19:16.192 00:19:16.192 --- 10.0.0.3 ping statistics --- 00:19:16.192 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:16.192 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:19:16.192 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:16.192 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:16.192 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.050 ms 00:19:16.192 00:19:16.192 --- 10.0.0.4 ping statistics --- 00:19:16.192 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:16.192 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:19:16.192 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:16.192 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:16.192 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:19:16.192 00:19:16.192 --- 10.0.0.1 ping statistics --- 00:19:16.192 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:16.192 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:19:16.192 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:16.192 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:16.192 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.043 ms 00:19:16.192 00:19:16.192 --- 10.0.0.2 ping statistics --- 00:19:16.192 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:16.192 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:19:16.192 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:16.192 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@461 -- # return 0 00:19:16.192 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:16.192 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:16.192 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:16.192 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:16.192 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:16.192 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:16.192 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:16.192 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:19:16.192 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:16.192 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:16.192 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:16.192 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@509 -- # nvmfpid=81697 00:19:16.192 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@510 -- # waitforlisten 81697 00:19:16.192 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:19:16.192 11:03:02 
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 81697 ']' 00:19:16.192 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:16.192 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:16.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:16.192 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:16.192 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:16.192 11:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:16.192 [2024-11-15 11:03:02.985627] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:19:16.192 [2024-11-15 11:03:02.985707] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:16.450 [2024-11-15 11:03:03.140825] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:16.450 [2024-11-15 11:03:03.198501] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:16.450 [2024-11-15 11:03:03.198588] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:16.450 [2024-11-15 11:03:03.198608] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:16.450 [2024-11-15 11:03:03.198619] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:16.450 [2024-11-15 11:03:03.198628] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
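Because NVMF_TARGET_NS_CMD is prepended to NVMF_APP above, nvmfappstart launches the target inside the test namespace, which is why it can later listen on 10.0.0.3. A minimal stand-in for what the trace shows (paths and flags verbatim from the trace; backgrounding and the polling loop are assumptions about what waitforlisten effectively does, using the rpc_get_methods RPC as a readiness probe):

  # run the target inside the namespace so it can bind the 10.0.0.3/10.0.0.4 addresses
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
  nvmfpid=$!

  # rough stand-in for waitforlisten: poll until the RPC socket answers
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
      sleep 0.1
  done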
00:19:16.451 [2024-11-15 11:03:03.200057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:16.451 [2024-11-15 11:03:03.200083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:16.451 [2024-11-15 11:03:03.273317] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:16.709 11:03:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:16.709 11:03:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:19:16.709 11:03:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:16.709 11:03:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:16.709 11:03:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:16.709 11:03:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:16.709 11:03:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:16.709 11:03:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:17.015 [2024-11-15 11:03:03.680669] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:17.015 11:03:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:19:17.282 Malloc0 00:19:17.282 11:03:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:17.541 11:03:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:17.800 11:03:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:18.058 [2024-11-15 11:03:04.680799] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:18.058 11:03:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:19:18.058 11:03:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=81734 00:19:18.058 11:03:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 81734 /var/tmp/bdevperf.sock 00:19:18.058 11:03:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 81734 ']' 00:19:18.058 11:03:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:18.058 11:03:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:18.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:18.058 11:03:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
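Stripped of the xtrace prefixes, host/timeout.sh lines 25-29 above configure the target over the default RPC socket: a TCP transport (with the -o and -u 8192 options seen in the trace), a Malloc0 ram bdev (64 MB, 512-byte blocks), and a subsystem that exposes that bdev as a namespace on the listener address inside the test netns. A condensed restatement, flags taken verbatim from the trace:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420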
00:19:18.058 11:03:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:18.058 11:03:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:18.058 [2024-11-15 11:03:04.738337] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:19:18.058 [2024-11-15 11:03:04.738406] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81734 ] 00:19:18.058 [2024-11-15 11:03:04.874798] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:18.318 [2024-11-15 11:03:04.929999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:18.318 [2024-11-15 11:03:05.004030] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:18.318 11:03:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:18.318 11:03:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:19:18.318 11:03:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:19:18.577 11:03:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:19:19.145 NVMe0n1 00:19:19.145 11:03:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:19.145 11:03:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=81752 00:19:19.145 11:03:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:19:19.145 Running I/O for 10 seconds... 
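On the initiator side, bdevperf gets its own RPC socket: bdev_nvme_set_options is called with -r -1, the controller is attached with a 5-second ctrlr-loss timeout and a 2-second reconnect delay, and perform_tests starts the 10-second queued verify workload. The listener is then removed from the target (timeout.sh line 55, the first command of the next block) while that workload is still running, which is what produces the long run of "aborting queued i/o" / "ABORTED - SQ DELETION" completions below. A condensed sketch of that flow, flags verbatim from the trace; the brpc wrapper is only shorthand introduced here for brevity:

  brpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock "$@"; }

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &

  brpc bdev_nvme_set_options -r -1          # -r -1 exactly as traced above
  brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2

  # start the workload, then pull the listener out from under it
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bdevperf.sock perform_tests &
  sleep 1
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener \
      nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420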
00:19:20.081 11:03:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:20.343 7266.00 IOPS, 28.38 MiB/s [2024-11-15T11:03:07.204Z] [2024-11-15 11:03:06.985733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:66552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.343 [2024-11-15 11:03:06.985800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.343 [2024-11-15 11:03:06.985831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:66560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.343 [2024-11-15 11:03:06.985846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.343 [2024-11-15 11:03:06.985857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:66568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.343 [2024-11-15 11:03:06.985865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.343 [2024-11-15 11:03:06.985875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:66576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.343 [2024-11-15 11:03:06.985884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.343 [2024-11-15 11:03:06.985894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:66584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.343 [2024-11-15 11:03:06.985903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.343 [2024-11-15 11:03:06.985913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:66592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.343 [2024-11-15 11:03:06.985922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.343 [2024-11-15 11:03:06.985932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:66600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.343 [2024-11-15 11:03:06.985940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.343 [2024-11-15 11:03:06.985950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:66608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.343 [2024-11-15 11:03:06.985959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.343 [2024-11-15 11:03:06.985969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:66616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.343 [2024-11-15 11:03:06.985979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.343 [2024-11-15 11:03:06.985989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:66624 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.343 [2024-11-15 11:03:06.985998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.343 [2024-11-15 11:03:06.986008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:66632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.343 [2024-11-15 11:03:06.986017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.343 [2024-11-15 11:03:06.986028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:66640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.343 [2024-11-15 11:03:06.986037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.343 [2024-11-15 11:03:06.986048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:66648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:20.343 [2024-11-15 11:03:06.986057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.343 [2024-11-15 11:03:06.986067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:66416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.343 [2024-11-15 11:03:06.986075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.343 [2024-11-15 11:03:06.986098] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1048280 is same with the state(6) to be set 00:19:20.343 [2024-11-15 11:03:06.986108] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:20.343 [2024-11-15 11:03:06.986123] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:20.343 [2024-11-15 11:03:06.986130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:66424 len:8 PRP1 0x0 PRP2 0x0 00:19:20.343 [2024-11-15 11:03:06.986138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.343 [2024-11-15 11:03:06.986148] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:20.343 [2024-11-15 11:03:06.986156] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:20.343 [2024-11-15 11:03:06.986164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66656 len:8 PRP1 0x0 PRP2 0x0 00:19:20.343 [2024-11-15 11:03:06.986172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.343 [2024-11-15 11:03:06.986181] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:20.343 [2024-11-15 11:03:06.986188] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:20.343 [2024-11-15 11:03:06.986195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66664 len:8 PRP1 0x0 PRP2 0x0 00:19:20.343 [2024-11-15 11:03:06.986203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.343 
[2024-11-15 11:03:06.986211] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:20.343 [2024-11-15 11:03:06.986217] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:20.343 [2024-11-15 11:03:06.986224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66672 len:8 PRP1 0x0 PRP2 0x0 00:19:20.343 [2024-11-15 11:03:06.986232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.343 [2024-11-15 11:03:06.986250] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:20.343 [2024-11-15 11:03:06.986257] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:20.343 [2024-11-15 11:03:06.986264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66680 len:8 PRP1 0x0 PRP2 0x0 00:19:20.343 [2024-11-15 11:03:06.986272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.343 [2024-11-15 11:03:06.986281] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:20.343 [2024-11-15 11:03:06.986287] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:20.343 [2024-11-15 11:03:06.986294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66688 len:8 PRP1 0x0 PRP2 0x0 00:19:20.344 [2024-11-15 11:03:06.986302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.344 [2024-11-15 11:03:06.986310] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:20.344 [2024-11-15 11:03:06.986317] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:20.344 [2024-11-15 11:03:06.986323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66696 len:8 PRP1 0x0 PRP2 0x0 00:19:20.344 [2024-11-15 11:03:06.986331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.344 [2024-11-15 11:03:06.986338] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:20.344 [2024-11-15 11:03:06.986344] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:20.344 [2024-11-15 11:03:06.986351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66704 len:8 PRP1 0x0 PRP2 0x0 00:19:20.344 [2024-11-15 11:03:06.986358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.344 [2024-11-15 11:03:06.986366] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:20.344 [2024-11-15 11:03:06.986372] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:20.344 [2024-11-15 11:03:06.986378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66712 len:8 PRP1 0x0 PRP2 0x0 00:19:20.344 [2024-11-15 11:03:06.986385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.344 [2024-11-15 11:03:06.986394] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:20.344 [2024-11-15 11:03:06.986403] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:20.344 [2024-11-15 11:03:06.986410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66720 len:8 PRP1 0x0 PRP2 0x0 00:19:20.344 [2024-11-15 11:03:06.986419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.344 [2024-11-15 11:03:06.986427] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:20.344 [2024-11-15 11:03:06.986434] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:20.344 [2024-11-15 11:03:06.986454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66728 len:8 PRP1 0x0 PRP2 0x0 00:19:20.344 [2024-11-15 11:03:06.986462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.344 [2024-11-15 11:03:06.986470] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:20.344 [2024-11-15 11:03:06.986483] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:20.344 [2024-11-15 11:03:06.986490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66736 len:8 PRP1 0x0 PRP2 0x0 00:19:20.344 [2024-11-15 11:03:06.986498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.344 [2024-11-15 11:03:06.986515] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:20.344 [2024-11-15 11:03:06.986521] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:20.344 [2024-11-15 11:03:06.986552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66744 len:8 PRP1 0x0 PRP2 0x0 00:19:20.344 [2024-11-15 11:03:06.986561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.344 [2024-11-15 11:03:06.986570] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:20.344 [2024-11-15 11:03:06.986577] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:20.344 [2024-11-15 11:03:06.986583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66752 len:8 PRP1 0x0 PRP2 0x0 00:19:20.344 [2024-11-15 11:03:06.986591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.344 [2024-11-15 11:03:06.986600] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:20.344 [2024-11-15 11:03:06.986606] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:20.344 [2024-11-15 11:03:06.986613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66760 len:8 PRP1 0x0 PRP2 0x0 00:19:20.344 [2024-11-15 11:03:06.986620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.344 [2024-11-15 11:03:06.986628] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:19:20.344 [2024-11-15 11:03:06.986635] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:20.344 [2024-11-15 11:03:06.986641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66768 len:8 PRP1 0x0 PRP2 0x0 00:19:20.344 [2024-11-15 11:03:06.986659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.344 [2024-11-15 11:03:06.986667] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:20.344 [2024-11-15 11:03:06.986673] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:20.344 [2024-11-15 11:03:06.986680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66776 len:8 PRP1 0x0 PRP2 0x0 00:19:20.344 [2024-11-15 11:03:06.986688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.344 [2024-11-15 11:03:06.986697] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:20.344 [2024-11-15 11:03:06.986704] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:20.344 [2024-11-15 11:03:06.986712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66784 len:8 PRP1 0x0 PRP2 0x0 00:19:20.344 [2024-11-15 11:03:06.986720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.344 [2024-11-15 11:03:06.986728] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:20.344 [2024-11-15 11:03:06.986735] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:20.344 [2024-11-15 11:03:06.986742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66792 len:8 PRP1 0x0 PRP2 0x0 00:19:20.344 [2024-11-15 11:03:06.986750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.344 [2024-11-15 11:03:06.986759] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:20.344 [2024-11-15 11:03:06.986766] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:20.344 [2024-11-15 11:03:06.986773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66800 len:8 PRP1 0x0 PRP2 0x0 00:19:20.344 [2024-11-15 11:03:06.986781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.344 [2024-11-15 11:03:06.986789] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:20.344 [2024-11-15 11:03:06.986795] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:20.344 [2024-11-15 11:03:06.986802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66808 len:8 PRP1 0x0 PRP2 0x0 00:19:20.344 [2024-11-15 11:03:06.986809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.344 [2024-11-15 11:03:06.986817] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:20.344 [2024-11-15 11:03:06.986823] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:20.344 [2024-11-15 11:03:06.986836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66816 len:8 PRP1 0x0 PRP2 0x0 00:19:20.344 [2024-11-15 11:03:06.986844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.344 [2024-11-15 11:03:06.986851] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:20.344 [2024-11-15 11:03:06.986857] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:20.344 [2024-11-15 11:03:06.986864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66824 len:8 PRP1 0x0 PRP2 0x0 00:19:20.344 [2024-11-15 11:03:06.986871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.344 [2024-11-15 11:03:06.986879] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:20.344 [2024-11-15 11:03:06.986886] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:20.344 [2024-11-15 11:03:06.986892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66832 len:8 PRP1 0x0 PRP2 0x0 00:19:20.344 [2024-11-15 11:03:06.986899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.344 [2024-11-15 11:03:06.986907] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:20.344 [2024-11-15 11:03:06.986913] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:20.344 [2024-11-15 11:03:06.986920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66840 len:8 PRP1 0x0 PRP2 0x0 00:19:20.345 [2024-11-15 11:03:06.986928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.345 [2024-11-15 11:03:06.986938] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:20.345 [2024-11-15 11:03:06.986945] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:20.345 [2024-11-15 11:03:06.986963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66848 len:8 PRP1 0x0 PRP2 0x0 00:19:20.345 [2024-11-15 11:03:06.986973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.345 [2024-11-15 11:03:06.986996] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:20.345 [2024-11-15 11:03:06.987003] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:20.345 [2024-11-15 11:03:06.987010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66856 len:8 PRP1 0x0 PRP2 0x0 00:19:20.345 [2024-11-15 11:03:06.987018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.345 [2024-11-15 11:03:06.987026] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:20.345 [2024-11-15 11:03:06.987033] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:19:20.345 [2024-11-15 11:03:06.987040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66864 len:8 PRP1 0x0 PRP2 0x0 00:19:20.345 [2024-11-15 11:03:06.987051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.345 [2024-11-15 11:03:06.987059] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:20.345 [2024-11-15 11:03:06.987065] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:20.345 [2024-11-15 11:03:06.987072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66872 len:8 PRP1 0x0 PRP2 0x0 00:19:20.345 [2024-11-15 11:03:06.987079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.345 [2024-11-15 11:03:06.987088] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:20.345 [2024-11-15 11:03:06.987102] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:20.345 [2024-11-15 11:03:06.987109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66880 len:8 PRP1 0x0 PRP2 0x0 00:19:20.345 [2024-11-15 11:03:06.987116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.345 [2024-11-15 11:03:06.987125] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:20.345 [2024-11-15 11:03:06.987130] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:20.345 [2024-11-15 11:03:06.987137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66888 len:8 PRP1 0x0 PRP2 0x0 00:19:20.345 [2024-11-15 11:03:06.987144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.345 [2024-11-15 11:03:06.987152] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:20.345 [2024-11-15 11:03:06.987158] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:20.345 [2024-11-15 11:03:06.987165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66896 len:8 PRP1 0x0 PRP2 0x0 00:19:20.345 [2024-11-15 11:03:06.987173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.345 [2024-11-15 11:03:06.987181] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:20.345 [2024-11-15 11:03:06.987188] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:20.345 [2024-11-15 11:03:06.987194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66904 len:8 PRP1 0x0 PRP2 0x0 00:19:20.345 [2024-11-15 11:03:06.987201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.345 [2024-11-15 11:03:06.987211] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:20.345 [2024-11-15 11:03:06.987217] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:20.345 
[2024-11-15 11:03:06.987226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66912 len:8 PRP1 0x0 PRP2 0x0 00:19:20.345 [2024-11-15 11:03:06.987234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.345 [2024-11-15 11:03:06.987243] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:20.345 [2024-11-15 11:03:06.987249] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:20.345 [2024-11-15 11:03:06.987257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66920 len:8 PRP1 0x0 PRP2 0x0 00:19:20.345 [2024-11-15 11:03:06.987265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.345 [2024-11-15 11:03:06.987273] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:20.345 [2024-11-15 11:03:06.987279] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:20.345 [2024-11-15 11:03:06.987286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66928 len:8 PRP1 0x0 PRP2 0x0 00:19:20.345 [2024-11-15 11:03:06.987294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.345 [2024-11-15 11:03:06.987302] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:20.345 [2024-11-15 11:03:06.987309] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:20.345 [2024-11-15 11:03:06.987316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66936 len:8 PRP1 0x0 PRP2 0x0 00:19:20.345 [2024-11-15 11:03:06.987324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.345 [2024-11-15 11:03:06.987332] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:20.345 [2024-11-15 11:03:06.987338] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:20.345 [2024-11-15 11:03:06.987346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66944 len:8 PRP1 0x0 PRP2 0x0 00:19:20.345 [2024-11-15 11:03:06.987353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.345 [2024-11-15 11:03:06.987361] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:20.345 [2024-11-15 11:03:06.987367] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:20.345 [2024-11-15 11:03:06.987374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66952 len:8 PRP1 0x0 PRP2 0x0 00:19:20.345 [2024-11-15 11:03:06.987382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.345 [2024-11-15 11:03:06.987389] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:20.345 [2024-11-15 11:03:06.987395] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:20.345 [2024-11-15 11:03:06.987402] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66960 len:8 PRP1 0x0 PRP2 0x0 00:19:20.345 [2024-11-15 11:03:06.987409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.345 [2024-11-15 11:03:06.987417] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:20.345 [2024-11-15 11:03:06.987424] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:20.345 [2024-11-15 11:03:06.987431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66968 len:8 PRP1 0x0 PRP2 0x0 00:19:20.345 [2024-11-15 11:03:06.987439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.345 [2024-11-15 11:03:06.987460] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:20.345 [2024-11-15 11:03:06.987467] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:20.345 [2024-11-15 11:03:06.987474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66976 len:8 PRP1 0x0 PRP2 0x0 00:19:20.345 [2024-11-15 11:03:06.987482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.345 [2024-11-15 11:03:06.987498] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:20.345 [2024-11-15 11:03:06.987505] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:20.345 [2024-11-15 11:03:06.987512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66984 len:8 PRP1 0x0 PRP2 0x0 00:19:20.345 [2024-11-15 11:03:06.987520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.345 [2024-11-15 11:03:06.987538] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:20.345 [2024-11-15 11:03:06.987545] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:20.345 [2024-11-15 11:03:06.987553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66992 len:8 PRP1 0x0 PRP2 0x0 00:19:20.345 [2024-11-15 11:03:06.987560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.346 [2024-11-15 11:03:06.987569] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:20.346 [2024-11-15 11:03:06.987575] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:20.346 [2024-11-15 11:03:06.987582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67000 len:8 PRP1 0x0 PRP2 0x0 00:19:20.346 [2024-11-15 11:03:06.987590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.346 [2024-11-15 11:03:06.987598] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:20.346 [2024-11-15 11:03:06.987616] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:20.346 [2024-11-15 11:03:06.987623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:67008 len:8 PRP1 0x0 PRP2 0x0 00:19:20.346 [2024-11-15 11:03:06.987631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.346 [2024-11-15 11:03:06.987638] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:20.346 [2024-11-15 11:03:06.987645] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:20.346 [2024-11-15 11:03:06.987651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67016 len:8 PRP1 0x0 PRP2 0x0 00:19:20.346 [2024-11-15 11:03:06.987660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.346 [2024-11-15 11:03:06.987668] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:20.346 [2024-11-15 11:03:06.987675] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:20.346 [2024-11-15 11:03:06.987681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67024 len:8 PRP1 0x0 PRP2 0x0 00:19:20.346 [2024-11-15 11:03:06.987689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.346 [2024-11-15 11:03:06.987697] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:20.346 [2024-11-15 11:03:06.987703] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:20.346 [2024-11-15 11:03:06.987711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67032 len:8 PRP1 0x0 PRP2 0x0 00:19:20.346 [2024-11-15 11:03:06.987720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.346 [2024-11-15 11:03:06.987728] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:20.346 [2024-11-15 11:03:06.987735] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:20.346 [2024-11-15 11:03:06.987741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67040 len:8 PRP1 0x0 PRP2 0x0 00:19:20.346 [2024-11-15 11:03:06.987749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.346 [2024-11-15 11:03:06.987769] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:20.346 [2024-11-15 11:03:06.987776] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:20.346 [2024-11-15 11:03:06.987783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67048 len:8 PRP1 0x0 PRP2 0x0 00:19:20.346 [2024-11-15 11:03:06.987791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.346 [2024-11-15 11:03:06.987799] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:20.346 [2024-11-15 11:03:06.987805] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:20.346 [2024-11-15 11:03:06.987812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67056 len:8 PRP1 0x0 PRP2 0x0 
00:19:20.346 [2024-11-15 11:03:06.987820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:20.346 [2024-11-15 11:03:06.987854] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:19:20.346 [2024-11-15 11:03:06.987862] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:19:20.346 [2024-11-15 11:03:06.987869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67064 len:8 PRP1 0x0 PRP2 0x0
00:19:20.346 [2024-11-15 11:03:06.987877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same aborting queued i/o / Command completed manually / WRITE (or READ) / ABORTED - SQ DELETION sequence repeats on qid:1 cid:0 for WRITE lba:67072 through lba:67432 and READ lba:66432 through lba:66544, timestamps 11:03:06.987886 through 11:03:06.997252 ...]
00:19:20.349 [2024-11-15 11:03:06.997388] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:19:20.349 [2024-11-15 11:03:06.997405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:20.349 [2024-11-15 11:03:06.997415] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:19:20.349 [2024-11-15 11:03:06.997424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:20.349 [2024-11-15 11:03:06.997434] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:19:20.349 [2024-11-15 11:03:06.997442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:20.349 [2024-11-15 11:03:06.997451] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:19:20.349 [2024-11-15 11:03:06.997459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:20.349 [2024-11-15 11:03:06.997467] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfdae50 is same with the state(6) to be set
00:19:20.349 [2024-11-15 11:03:06.997686] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:19:20.349 [2024-11-15 11:03:06.997712] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfdae50 (9): Bad file descriptor
00:19:20.349 [2024-11-15 11:03:06.997789] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:19:20.349 [2024-11-15 11:03:06.997809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfdae50 with addr=10.0.0.3, port=4420
00:19:20.349 [2024-11-15 11:03:06.997820] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfdae50 is same with the state(6) to be set
00:19:20.349 [2024-11-15 11:03:06.997836] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfdae50 (9): Bad file descriptor
00:19:20.349 [2024-11-15 11:03:06.997851] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:19:20.349 [2024-11-15 11:03:06.997859] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*:
[nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:19:20.349 [2024-11-15 11:03:06.997869] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:19:20.349 [2024-11-15 11:03:06.997879] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:19:20.349 [2024-11-15 11:03:06.997888] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:19:20.349 11:03:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:19:22.222 4151.00 IOPS, 16.21 MiB/s [2024-11-15T11:03:09.083Z] 2767.33 IOPS, 10.81 MiB/s [2024-11-15T11:03:09.083Z] [2024-11-15 11:03:08.998104] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:22.222 [2024-11-15 11:03:08.998172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfdae50 with addr=10.0.0.3, port=4420 00:19:22.222 [2024-11-15 11:03:08.998194] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfdae50 is same with the state(6) to be set 00:19:22.222 [2024-11-15 11:03:08.998215] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfdae50 (9): Bad file descriptor 00:19:22.222 [2024-11-15 11:03:08.998232] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:19:22.222 [2024-11-15 11:03:08.998241] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:19:22.222 [2024-11-15 11:03:08.998251] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:19:22.222 [2024-11-15 11:03:08.998261] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
00:19:22.222 [2024-11-15 11:03:08.998272] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:19:22.222 11:03:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:19:22.222 11:03:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:19:22.222 11:03:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:22.481 11:03:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:19:22.481 11:03:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:19:22.481 11:03:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:19:22.481 11:03:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:19:22.740 11:03:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:19:22.740 11:03:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:19:24.376 2075.50 IOPS, 8.11 MiB/s [2024-11-15T11:03:11.237Z] 1660.40 IOPS, 6.49 MiB/s [2024-11-15T11:03:11.237Z] [2024-11-15 11:03:10.998534] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:24.376 [2024-11-15 11:03:10.998634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfdae50 with addr=10.0.0.3, port=4420 00:19:24.376 [2024-11-15 11:03:10.998652] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfdae50 is same with the state(6) to be set 00:19:24.376 [2024-11-15 11:03:10.998683] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfdae50 (9): Bad file descriptor 00:19:24.376 [2024-11-15 11:03:10.998714] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:19:24.376 [2024-11-15 11:03:10.998726] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:19:24.376 [2024-11-15 11:03:10.998737] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:19:24.376 [2024-11-15 11:03:10.998748] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:19:24.376 [2024-11-15 11:03:10.998759] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:19:26.250 1383.67 IOPS, 5.40 MiB/s [2024-11-15T11:03:13.111Z] 1186.00 IOPS, 4.63 MiB/s [2024-11-15T11:03:13.111Z] [2024-11-15 11:03:12.998842] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:19:26.250 [2024-11-15 11:03:12.998920] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:19:26.250 [2024-11-15 11:03:12.998943] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:19:26.250 [2024-11-15 11:03:12.998964] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] already in failed state 00:19:26.250 [2024-11-15 11:03:12.998976] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
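The @57/@58 checks in the trace above are the host-side assertion that, two seconds into the outage, the controller and its namespace bdev are still registered while reconnects are in flight; the matching @62/@63 checks later expect both queries to come back empty once the controller-loss timeout has expired. A minimal standalone reconstruction of that probe, using the same rpc.py path and bdevperf socket as this run, would be:

# Ask the running bdevperf instance what it still has attached.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock
ctrlr=$("$rpc" -s "$sock" bdev_nvme_get_controllers | jq -r '.[].name')
bdev=$("$rpc" -s "$sock" bdev_get_bdevs | jq -r '.[].name')
# Mid-outage the names are still NVMe0/NVMe0n1; after the controller-loss
# timeout expires both queries return nothing and the comparisons see empty strings.
[[ "$ctrlr" == "NVMe0" ]] && echo "controller still attached: $ctrlr"
[[ "$bdev" == "NVMe0n1" ]] && echo "namespace bdev still attached: $bdev"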
00:19:27.187 1037.75 IOPS, 4.05 MiB/s 00:19:27.187 Latency(us) 00:19:27.187 [2024-11-15T11:03:14.048Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:27.187 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:27.187 Verification LBA range: start 0x0 length 0x4000 00:19:27.187 NVMe0n1 : 8.15 1018.94 3.98 15.71 0.00 123552.84 2904.44 7046430.72 00:19:27.187 [2024-11-15T11:03:14.048Z] =================================================================================================================== 00:19:27.187 [2024-11-15T11:03:14.048Z] Total : 1018.94 3.98 15.71 0.00 123552.84 2904.44 7046430.72 00:19:27.187 { 00:19:27.187 "results": [ 00:19:27.187 { 00:19:27.187 "job": "NVMe0n1", 00:19:27.187 "core_mask": "0x4", 00:19:27.187 "workload": "verify", 00:19:27.187 "status": "finished", 00:19:27.187 "verify_range": { 00:19:27.187 "start": 0, 00:19:27.187 "length": 16384 00:19:27.187 }, 00:19:27.187 "queue_depth": 128, 00:19:27.187 "io_size": 4096, 00:19:27.187 "runtime": 8.147651, 00:19:27.187 "iops": 1018.9439876597562, 00:19:27.187 "mibps": 3.9802499517959227, 00:19:27.187 "io_failed": 128, 00:19:27.187 "io_timeout": 0, 00:19:27.187 "avg_latency_us": 123552.84298069663, 00:19:27.187 "min_latency_us": 2904.4363636363637, 00:19:27.187 "max_latency_us": 7046430.72 00:19:27.187 } 00:19:27.187 ], 00:19:27.187 "core_count": 1 00:19:27.187 } 00:19:27.754 11:03:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:19:27.754 11:03:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:27.754 11:03:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:19:28.014 11:03:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:19:28.014 11:03:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:19:28.014 11:03:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:19:28.014 11:03:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:19:28.272 11:03:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:19:28.272 11:03:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@65 -- # wait 81752 00:19:28.272 11:03:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 81734 00:19:28.272 11:03:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 81734 ']' 00:19:28.272 11:03:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 81734 00:19:28.272 11:03:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:19:28.272 11:03:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:28.272 11:03:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81734 00:19:28.272 11:03:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:28.272 killing process with pid 81734 00:19:28.272 11:03:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:28.272 11:03:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81734' 00:19:28.272 Received shutdown signal, test time was about 9.256570 seconds 
00:19:28.272 00:19:28.272 Latency(us) 00:19:28.272 [2024-11-15T11:03:15.133Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:28.272 [2024-11-15T11:03:15.133Z] =================================================================================================================== 00:19:28.272 [2024-11-15T11:03:15.133Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:28.272 11:03:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 81734 00:19:28.272 11:03:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 81734 00:19:28.531 11:03:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:28.790 [2024-11-15 11:03:15.609859] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:28.790 11:03:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:19:28.790 11:03:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=81873 00:19:28.790 11:03:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 81873 /var/tmp/bdevperf.sock 00:19:28.790 11:03:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 81873 ']' 00:19:28.790 11:03:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:28.790 11:03:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:28.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:28.790 11:03:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:28.790 11:03:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:28.790 11:03:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:29.049 [2024-11-15 11:03:15.665336] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
00:19:29.049 [2024-11-15 11:03:15.665422] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81873 ] 00:19:29.049 [2024-11-15 11:03:15.800068] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:29.049 [2024-11-15 11:03:15.854238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:29.308 [2024-11-15 11:03:15.927397] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:29.308 11:03:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:29.308 11:03:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:19:29.308 11:03:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:19:29.567 11:03:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:19:29.825 NVMe0n1 00:19:29.825 11:03:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=81889 00:19:29.825 11:03:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:29.825 11:03:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:19:29.825 Running I/O for 10 seconds... 
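Before the second run starts, the trace above shows bdevperf being pointed at the target with an explicit reconnect policy. Restated on their own, with the reconnect knobs annotated (socket, address and names exactly as in this run), the two RPCs from timeout.sh@78 and @79 are:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Option string as issued by timeout.sh@78 above.
"$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
# timeout.sh@79: attach the controller and set the policy under test --
#   --reconnect-delay-sec 1       wait one second between reconnect attempts
#   --fast-io-fail-timeout-sec 2  start failing queued I/O after 2 s without a connection
#   --ctrlr-loss-timeout-sec 5    give up and delete the controller after 5 s of loss
"$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
    -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1

The log that follows picks up right after timeout.sh@87 removes the target's listener on 10.0.0.3:4420.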
00:19:30.762 11:03:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:19:31.023 7913.00 IOPS, 30.91 MiB/s [2024-11-15T11:03:17.884Z] [2024-11-15 11:03:17.826863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:75336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:19:31.023 [2024-11-15 11:03:17.826939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:31.023 [2024-11-15 11:03:17.826971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:75344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:19:31.023 [2024-11-15 11:03:17.826981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same WRITE (or READ) / ABORTED - SQ DELETION pair repeats with varying cid for WRITE lba:75352 through lba:75736 (SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ lba:74736 through lba:74792 (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), timestamps 11:03:17.826991 through 11:03:17.828097 ...]
00:19:31.025 [2024-11-15 11:03:17.828107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:74800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:31.025 [2024-11-15 11:03:17.828116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.025
[2024-11-15 11:03:17.828125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:74808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.025 [2024-11-15 11:03:17.828133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.025 [2024-11-15 11:03:17.828143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:74816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.025 [2024-11-15 11:03:17.828151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.025 [2024-11-15 11:03:17.828160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:74824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.025 [2024-11-15 11:03:17.828169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.025 [2024-11-15 11:03:17.828179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:74832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.025 [2024-11-15 11:03:17.828189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.025 [2024-11-15 11:03:17.828199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:74840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.025 [2024-11-15 11:03:17.828208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.025 [2024-11-15 11:03:17.828219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:74848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.025 [2024-11-15 11:03:17.828227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.025 [2024-11-15 11:03:17.828237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:74856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.025 [2024-11-15 11:03:17.828245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.025 [2024-11-15 11:03:17.828255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:74864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.025 [2024-11-15 11:03:17.828264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.025 [2024-11-15 11:03:17.828273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:74872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.025 [2024-11-15 11:03:17.828286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.025 [2024-11-15 11:03:17.828296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:74880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.025 [2024-11-15 11:03:17.828305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.025 [2024-11-15 11:03:17.828315] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:74888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.025 [2024-11-15 11:03:17.828323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.025 [2024-11-15 11:03:17.828332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:74896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.025 [2024-11-15 11:03:17.828340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.025 [2024-11-15 11:03:17.828350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:74904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.025 [2024-11-15 11:03:17.828358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.025 [2024-11-15 11:03:17.828368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:74912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.025 [2024-11-15 11:03:17.828376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.025 [2024-11-15 11:03:17.828386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:74920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.025 [2024-11-15 11:03:17.828394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.025 [2024-11-15 11:03:17.828403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:74928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.025 [2024-11-15 11:03:17.828413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.025 [2024-11-15 11:03:17.828422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:74936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.025 [2024-11-15 11:03:17.828431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.025 [2024-11-15 11:03:17.828440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:74944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.025 [2024-11-15 11:03:17.828449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.025 [2024-11-15 11:03:17.828458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:74952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.025 [2024-11-15 11:03:17.828467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.025 [2024-11-15 11:03:17.828476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:75744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.025 [2024-11-15 11:03:17.828486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.025 [2024-11-15 11:03:17.828495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:53 nsid:1 lba:74960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.025 [2024-11-15 11:03:17.828505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.025 [2024-11-15 11:03:17.828515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:74968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.025 [2024-11-15 11:03:17.828533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.025 [2024-11-15 11:03:17.828545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:74976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.025 [2024-11-15 11:03:17.828553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.025 [2024-11-15 11:03:17.828563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:74984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.025 [2024-11-15 11:03:17.828572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.025 [2024-11-15 11:03:17.828582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:74992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.025 [2024-11-15 11:03:17.828590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.025 [2024-11-15 11:03:17.828608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:75000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.025 [2024-11-15 11:03:17.828617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.025 [2024-11-15 11:03:17.828627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:75008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.025 [2024-11-15 11:03:17.828636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.026 [2024-11-15 11:03:17.828645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:75752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.026 [2024-11-15 11:03:17.828653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.026 [2024-11-15 11:03:17.828664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:75016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.026 [2024-11-15 11:03:17.828675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.026 [2024-11-15 11:03:17.828685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:75024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.026 [2024-11-15 11:03:17.828693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.026 [2024-11-15 11:03:17.828703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:75032 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.026 [2024-11-15 11:03:17.828711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.026 [2024-11-15 11:03:17.828720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:75040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.026 [2024-11-15 11:03:17.828728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.026 [2024-11-15 11:03:17.828738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:75048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.026 [2024-11-15 11:03:17.828746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.026 [2024-11-15 11:03:17.828755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:75056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.026 [2024-11-15 11:03:17.828764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.026 [2024-11-15 11:03:17.828773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:75064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.026 [2024-11-15 11:03:17.828781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.026 [2024-11-15 11:03:17.828791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:75072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.026 [2024-11-15 11:03:17.828800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.026 [2024-11-15 11:03:17.828810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:75080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.026 [2024-11-15 11:03:17.828832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.026 [2024-11-15 11:03:17.828842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:75088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.026 [2024-11-15 11:03:17.828851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.026 [2024-11-15 11:03:17.828862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:75096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.026 [2024-11-15 11:03:17.828870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.026 [2024-11-15 11:03:17.828880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:75104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.026 [2024-11-15 11:03:17.828888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.026 [2024-11-15 11:03:17.828898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:75112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:31.026 [2024-11-15 11:03:17.828908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.026 [2024-11-15 11:03:17.828918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:75120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.026 [2024-11-15 11:03:17.828926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.026 [2024-11-15 11:03:17.828936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:75128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.026 [2024-11-15 11:03:17.828944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.026 [2024-11-15 11:03:17.828954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:75136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.026 [2024-11-15 11:03:17.828962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.026 [2024-11-15 11:03:17.828972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:75144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.026 [2024-11-15 11:03:17.828980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.026 [2024-11-15 11:03:17.828990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:75152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.026 [2024-11-15 11:03:17.828998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.026 [2024-11-15 11:03:17.829008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:75160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.026 [2024-11-15 11:03:17.829017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.026 [2024-11-15 11:03:17.829026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:75168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.026 [2024-11-15 11:03:17.829034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.026 [2024-11-15 11:03:17.829043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:75176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.026 [2024-11-15 11:03:17.829055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.026 [2024-11-15 11:03:17.829065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:75184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.026 [2024-11-15 11:03:17.829073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.026 [2024-11-15 11:03:17.829083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:75192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.026 [2024-11-15 
11:03:17.829091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.026 [2024-11-15 11:03:17.829101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:75200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.026 [2024-11-15 11:03:17.829109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.026 [2024-11-15 11:03:17.829120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:75208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.026 [2024-11-15 11:03:17.829134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.026 [2024-11-15 11:03:17.829152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:75216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.026 [2024-11-15 11:03:17.829162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.026 [2024-11-15 11:03:17.829172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:75224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.026 [2024-11-15 11:03:17.829180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.026 [2024-11-15 11:03:17.829190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:75232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.026 [2024-11-15 11:03:17.829198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.026 [2024-11-15 11:03:17.829207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:75240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.026 [2024-11-15 11:03:17.829216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.026 [2024-11-15 11:03:17.829225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:75248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.026 [2024-11-15 11:03:17.829233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.026 [2024-11-15 11:03:17.829242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:75256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.026 [2024-11-15 11:03:17.829251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.026 [2024-11-15 11:03:17.829260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:75264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.026 [2024-11-15 11:03:17.829269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.026 [2024-11-15 11:03:17.829278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:75272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.026 [2024-11-15 11:03:17.829286] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.026 [2024-11-15 11:03:17.829295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:75280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.026 [2024-11-15 11:03:17.829303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.026 [2024-11-15 11:03:17.829313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:75288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.026 [2024-11-15 11:03:17.829321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.026 [2024-11-15 11:03:17.829330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:75296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.026 [2024-11-15 11:03:17.829338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.026 [2024-11-15 11:03:17.829347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:75304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.026 [2024-11-15 11:03:17.829356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.026 [2024-11-15 11:03:17.829365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:75312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.026 [2024-11-15 11:03:17.829374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.026 [2024-11-15 11:03:17.829383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:75320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.026 [2024-11-15 11:03:17.829391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.027 [2024-11-15 11:03:17.829401] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb0280 is same with the state(6) to be set 00:19:31.027 [2024-11-15 11:03:17.829412] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:31.027 [2024-11-15 11:03:17.829420] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:31.027 [2024-11-15 11:03:17.829433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:75328 len:8 PRP1 0x0 PRP2 0x0 00:19:31.027 [2024-11-15 11:03:17.829447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.027 [2024-11-15 11:03:17.829620] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:31.027 [2024-11-15 11:03:17.829637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.027 [2024-11-15 11:03:17.829648] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:31.027 [2024-11-15 11:03:17.829656] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:31.027 [2024-11-15 11:03:17.829665] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:19:31.027 [2024-11-15 11:03:17.829674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:31.027 [2024-11-15 11:03:17.829683] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:19:31.027 [2024-11-15 11:03:17.829691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:31.027 [2024-11-15 11:03:17.829699] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf42e50 is same with the state(6) to be set
00:19:31.027 [2024-11-15 11:03:17.829889] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:19:31.027 [2024-11-15 11:03:17.829913] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf42e50 (9): Bad file descriptor
00:19:31.027 [2024-11-15 11:03:17.830029] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:19:31.027 [2024-11-15 11:03:17.830049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf42e50 with addr=10.0.0.3, port=4420
00:19:31.027 [2024-11-15 11:03:17.830059] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf42e50 is same with the state(6) to be set
00:19:31.027 [2024-11-15 11:03:17.830075] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf42e50 (9): Bad file descriptor
00:19:31.027 [2024-11-15 11:03:17.830090] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:19:31.027 [2024-11-15 11:03:17.830099] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:19:31.027 [2024-11-15 11:03:17.830109] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:19:31.027 [2024-11-15 11:03:17.830120] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
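The connect() failures above report errno = 111, which is ECONNREFUSED on Linux: the reconnect attempts are refused because the target's TCP listener on 10.0.0.3 port 4420 is not present at this point; it is only re-added via nvmf_subsystem_add_listener a little further below, after which the controller reset finally succeeds. A quick standalone check of the errno value (plain Python, not part of the test scripts):

import errno
import os

# errno 111, as printed by uring_sock_create's connect() failures above,
# is ECONNREFUSED on Linux.
print(errno.ECONNREFUSED)   # 111
print(os.strerror(111))     # Connection refused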
00:19:31.027 [2024-11-15 11:03:17.830131] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:19:31.027 11:03:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1
00:19:32.223 4671.00 IOPS, 18.25 MiB/s [2024-11-15T11:03:19.084Z] [2024-11-15 11:03:18.830220] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:19:32.223 [2024-11-15 11:03:18.830269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf42e50 with addr=10.0.0.3, port=4420
00:19:32.223 [2024-11-15 11:03:18.830282] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf42e50 is same with the state(6) to be set
00:19:32.223 [2024-11-15 11:03:18.830300] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf42e50 (9): Bad file descriptor
00:19:32.223 [2024-11-15 11:03:18.830315] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:19:32.223 [2024-11-15 11:03:18.830324] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:19:32.223 [2024-11-15 11:03:18.830333] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:19:32.223 [2024-11-15 11:03:18.830342] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:19:32.223 [2024-11-15 11:03:18.830352] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:19:32.223 11:03:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:19:32.223 [2024-11-15 11:03:19.080387] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:19:32.482 11:03:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@92 -- # wait 81889
00:19:33.051 3114.00 IOPS, 12.16 MiB/s [2024-11-15T11:03:19.912Z] [2024-11-15 11:03:19.844481] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
00:19:34.981 2335.50 IOPS, 9.12 MiB/s [2024-11-15T11:03:22.778Z] 3480.20 IOPS, 13.59 MiB/s [2024-11-15T11:03:23.716Z] 4564.17 IOPS, 17.83 MiB/s [2024-11-15T11:03:25.093Z] 5338.43 IOPS, 20.85 MiB/s [2024-11-15T11:03:25.660Z] 5907.38 IOPS, 23.08 MiB/s [2024-11-15T11:03:27.038Z] 6321.00 IOPS, 24.69 MiB/s [2024-11-15T11:03:27.038Z] 6657.30 IOPS, 26.01 MiB/s
00:19:40.177 Latency(us)
00:19:40.177 [2024-11-15T11:03:27.038Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:40.177 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:19:40.177 Verification LBA range: start 0x0 length 0x4000
00:19:40.177 NVMe0n1 : 10.01 6662.42 26.03 0.00 0.00 19175.20 1541.59 3019898.88
00:19:40.177 [2024-11-15T11:03:27.038Z] ===================================================================================================================
00:19:40.177 [2024-11-15T11:03:27.038Z] Total : 6662.42 26.03 0.00 0.00 19175.20 1541.59 3019898.88
00:19:40.177 {
00:19:40.177   "results": [
00:19:40.177     {
00:19:40.177       "job": "NVMe0n1",
00:19:40.177       "core_mask": "0x4",
00:19:40.177       "workload": "verify",
00:19:40.177       "status": "finished",
00:19:40.177       "verify_range": {
00:19:40.177         "start": 0,
00:19:40.177         "length": 16384
00:19:40.177       },
00:19:40.177       "queue_depth": 128,
00:19:40.177       "io_size": 4096,
00:19:40.177       "runtime": 10.008522,
00:19:40.177       "iops": 6662.422283729806,
00:19:40.177       "mibps": 26.025087045819554,
00:19:40.177       "io_failed": 0,
00:19:40.177       "io_timeout": 0,
00:19:40.177       "avg_latency_us": 19175.19629366959,
00:19:40.177       "min_latency_us": 1541.5854545454545,
00:19:40.177       "max_latency_us": 3019898.88
00:19:40.177     }
00:19:40.177   ],
00:19:40.177   "core_count": 1
00:19:40.177 }
00:19:40.177 11:03:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=81994
00:19:40.177 11:03:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:19:40.177 11:03:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1
00:19:40.177 Running I/O for 10 seconds...
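The bdevperf summary above is internally consistent: the MiB/s figure is just IOPS multiplied by the 4096-byte I/O size, and IOPS times runtime gives the number of I/Os completed. A minimal check using only the values printed in the JSON block (plain Python, not part of the test scripts):

# Values copied from the bdevperf JSON summary above.
io_size = 4096                # "io_size", bytes per I/O
runtime = 10.008522           # "runtime", seconds
iops = 6662.422283729806      # "iops"

mib_per_s = iops * io_size / (1024 * 1024)
total_ios = iops * runtime

print(f"{mib_per_s:.6f} MiB/s")   # ~26.025087, matching the reported "mibps"
print(f"{total_ios:.0f} I/Os")    # ~66681 I/Os completed in the 10 s run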
00:19:41.117 11:03:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:41.117 9031.00 IOPS, 35.28 MiB/s [2024-11-15T11:03:27.978Z] [2024-11-15 11:03:27.904739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:79032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.117 [2024-11-15 11:03:27.904809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.117 [2024-11-15 11:03:27.904831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:79040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.117 [2024-11-15 11:03:27.904841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.117 [2024-11-15 11:03:27.904852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:79048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.117 [2024-11-15 11:03:27.904861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.117 [2024-11-15 11:03:27.904871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:79056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.117 [2024-11-15 11:03:27.904880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.117 [2024-11-15 11:03:27.904890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:79064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.117 [2024-11-15 11:03:27.904899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.117 [2024-11-15 11:03:27.904918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:79072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.117 [2024-11-15 11:03:27.904927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.117 [2024-11-15 11:03:27.904936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:79080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.117 [2024-11-15 11:03:27.904945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.117 [2024-11-15 11:03:27.904955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:79088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.117 [2024-11-15 11:03:27.904964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.117 [2024-11-15 11:03:27.904975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:79096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.117 [2024-11-15 11:03:27.904983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.117 [2024-11-15 11:03:27.904993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:79104 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.117 [2024-11-15 11:03:27.905002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.117 [2024-11-15 11:03:27.905013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:79112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.117 [2024-11-15 11:03:27.905021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.117 [2024-11-15 11:03:27.905032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:79120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.117 [2024-11-15 11:03:27.905040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.117 [2024-11-15 11:03:27.905050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:79128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.117 [2024-11-15 11:03:27.905059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.117 [2024-11-15 11:03:27.905069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:79136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.117 [2024-11-15 11:03:27.905077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.117 [2024-11-15 11:03:27.905088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:79144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.117 [2024-11-15 11:03:27.905096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.117 [2024-11-15 11:03:27.905107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:79152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.117 [2024-11-15 11:03:27.905115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.117 [2024-11-15 11:03:27.905125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:79160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.117 [2024-11-15 11:03:27.905133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.117 [2024-11-15 11:03:27.905148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:79168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.117 [2024-11-15 11:03:27.905157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.117 [2024-11-15 11:03:27.905167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:79176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.117 [2024-11-15 11:03:27.905176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.117 [2024-11-15 11:03:27.905191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:79184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:41.117 [2024-11-15 11:03:27.905200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.117 [2024-11-15 11:03:27.905219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:78520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.117 [2024-11-15 11:03:27.905228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.117 [2024-11-15 11:03:27.905238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:78528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.117 [2024-11-15 11:03:27.905246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.118 [2024-11-15 11:03:27.905256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:78536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.118 [2024-11-15 11:03:27.905264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.118 [2024-11-15 11:03:27.905274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:78544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.118 [2024-11-15 11:03:27.905282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.118 [2024-11-15 11:03:27.905292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:78552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.118 [2024-11-15 11:03:27.905300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.118 [2024-11-15 11:03:27.905310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:78560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.118 [2024-11-15 11:03:27.905318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.118 [2024-11-15 11:03:27.905328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:78568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.118 [2024-11-15 11:03:27.905336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.118 [2024-11-15 11:03:27.905345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:78576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.118 [2024-11-15 11:03:27.905353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.118 [2024-11-15 11:03:27.905363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:78584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.118 [2024-11-15 11:03:27.905371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.118 [2024-11-15 11:03:27.905380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:78592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.118 [2024-11-15 11:03:27.905388] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.118 [2024-11-15 11:03:27.905397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:78600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.118 [2024-11-15 11:03:27.905405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.118 [2024-11-15 11:03:27.905415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:78608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.118 [2024-11-15 11:03:27.905423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.118 [2024-11-15 11:03:27.905433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:78616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.118 [2024-11-15 11:03:27.905442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.118 [2024-11-15 11:03:27.905466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:78624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.118 [2024-11-15 11:03:27.905474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.118 [2024-11-15 11:03:27.905483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:78632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.118 [2024-11-15 11:03:27.905492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.118 [2024-11-15 11:03:27.905501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:78640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.118 [2024-11-15 11:03:27.905509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.118 [2024-11-15 11:03:27.905518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:79192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.118 [2024-11-15 11:03:27.905550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.118 [2024-11-15 11:03:27.905562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:79200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.118 [2024-11-15 11:03:27.905571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.118 [2024-11-15 11:03:27.905581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:79208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.118 [2024-11-15 11:03:27.905589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.118 [2024-11-15 11:03:27.905599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:79216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.118 [2024-11-15 11:03:27.905608] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.118 [2024-11-15 11:03:27.905618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:79224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.118 [2024-11-15 11:03:27.905626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.118 [2024-11-15 11:03:27.905636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:79232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.118 [2024-11-15 11:03:27.905645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.118 [2024-11-15 11:03:27.905655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.118 [2024-11-15 11:03:27.905663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.118 [2024-11-15 11:03:27.905673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:79248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.118 [2024-11-15 11:03:27.905681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.118 [2024-11-15 11:03:27.905690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:79256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.118 [2024-11-15 11:03:27.905699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.118 [2024-11-15 11:03:27.905708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:79264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.118 [2024-11-15 11:03:27.905717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.118 [2024-11-15 11:03:27.905727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:79272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.118 [2024-11-15 11:03:27.905735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.118 [2024-11-15 11:03:27.905745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:79280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.118 [2024-11-15 11:03:27.905754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.118 [2024-11-15 11:03:27.905764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:79288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.118 [2024-11-15 11:03:27.905773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.118 [2024-11-15 11:03:27.905782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:79296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.118 [2024-11-15 11:03:27.905791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.118 [2024-11-15 11:03:27.905800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:78648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.118 [2024-11-15 11:03:27.905808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.118 [2024-11-15 11:03:27.905818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:78656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.118 [2024-11-15 11:03:27.905826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.118 [2024-11-15 11:03:27.905836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:78664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.118 [2024-11-15 11:03:27.905844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.118 [2024-11-15 11:03:27.905853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:78672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.118 [2024-11-15 11:03:27.905871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.118 [2024-11-15 11:03:27.905881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:78680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.118 [2024-11-15 11:03:27.905889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.118 [2024-11-15 11:03:27.905899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:78688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.118 [2024-11-15 11:03:27.905908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.118 [2024-11-15 11:03:27.905917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:78696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.118 [2024-11-15 11:03:27.905933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.118 [2024-11-15 11:03:27.905943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:78704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.118 [2024-11-15 11:03:27.905954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.118 [2024-11-15 11:03:27.905963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:79304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.119 [2024-11-15 11:03:27.905972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.119 [2024-11-15 11:03:27.905982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:79312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.119 [2024-11-15 11:03:27.905990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:41.119 [2024-11-15 11:03:27.906000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:79320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.119 [2024-11-15 11:03:27.906008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.119 [2024-11-15 11:03:27.906017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:79328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.119 [2024-11-15 11:03:27.906026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.119 [2024-11-15 11:03:27.906035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:79336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.119 [2024-11-15 11:03:27.906043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.119 [2024-11-15 11:03:27.906053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:79344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.119 [2024-11-15 11:03:27.906061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.119 [2024-11-15 11:03:27.906072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:79352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.119 [2024-11-15 11:03:27.906080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.119 [2024-11-15 11:03:27.906090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:79360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.119 [2024-11-15 11:03:27.906098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.119 [2024-11-15 11:03:27.906107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:79368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.119 [2024-11-15 11:03:27.906116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.119 [2024-11-15 11:03:27.906125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:79376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.119 [2024-11-15 11:03:27.906133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.119 [2024-11-15 11:03:27.906142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:79384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.119 [2024-11-15 11:03:27.906150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.119 [2024-11-15 11:03:27.906159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:79392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.119 [2024-11-15 11:03:27.906168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.119 [2024-11-15 11:03:27.906178] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:79400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.119 [2024-11-15 11:03:27.906186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.119 [2024-11-15 11:03:27.906197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:79408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.119 [2024-11-15 11:03:27.906205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.119 [2024-11-15 11:03:27.906215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:78712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.119 [2024-11-15 11:03:27.906223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.119 [2024-11-15 11:03:27.906234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:78720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.119 [2024-11-15 11:03:27.906242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.119 [2024-11-15 11:03:27.906252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:78728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.119 [2024-11-15 11:03:27.906260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.119 [2024-11-15 11:03:27.906270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:78736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.119 [2024-11-15 11:03:27.906279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.119 [2024-11-15 11:03:27.906289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:78744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.119 [2024-11-15 11:03:27.906297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.119 [2024-11-15 11:03:27.906308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:78752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.119 [2024-11-15 11:03:27.906316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.119 [2024-11-15 11:03:27.906326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:78760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.119 [2024-11-15 11:03:27.906334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.119 [2024-11-15 11:03:27.906344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:78768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.119 [2024-11-15 11:03:27.906354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.119 [2024-11-15 11:03:27.906364] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:62 nsid:1 lba:78776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.119 [2024-11-15 11:03:27.906372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.119 [2024-11-15 11:03:27.906382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:78784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.119 [2024-11-15 11:03:27.906390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.119 [2024-11-15 11:03:27.906399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:78792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.119 [2024-11-15 11:03:27.906408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.119 [2024-11-15 11:03:27.906417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:78800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.119 [2024-11-15 11:03:27.906426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.119 [2024-11-15 11:03:27.906436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:78808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.119 [2024-11-15 11:03:27.906446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.119 [2024-11-15 11:03:27.906456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:78816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.119 [2024-11-15 11:03:27.906464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.119 [2024-11-15 11:03:27.906473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:78824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.119 [2024-11-15 11:03:27.906482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.119 [2024-11-15 11:03:27.906491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:78832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.119 [2024-11-15 11:03:27.906499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.119 [2024-11-15 11:03:27.906509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:78840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.119 [2024-11-15 11:03:27.906518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.119 [2024-11-15 11:03:27.906563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:78848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.119 [2024-11-15 11:03:27.906574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.119 [2024-11-15 11:03:27.906584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 
lba:78856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.119 [2024-11-15 11:03:27.906593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.119 [2024-11-15 11:03:27.906603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:78864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.119 [2024-11-15 11:03:27.906611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.120 [2024-11-15 11:03:27.906621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:78872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.120 [2024-11-15 11:03:27.906629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.120 [2024-11-15 11:03:27.906639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:78880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.120 [2024-11-15 11:03:27.906647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.120 [2024-11-15 11:03:27.906657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:78888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.120 [2024-11-15 11:03:27.906674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.120 [2024-11-15 11:03:27.906684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:78896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.120 [2024-11-15 11:03:27.906693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.120 [2024-11-15 11:03:27.906703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:79416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.120 [2024-11-15 11:03:27.906712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.120 [2024-11-15 11:03:27.906730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:79424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.120 [2024-11-15 11:03:27.906739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.120 [2024-11-15 11:03:27.906750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:79432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.120 [2024-11-15 11:03:27.906758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.120 [2024-11-15 11:03:27.906768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:79440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.120 [2024-11-15 11:03:27.906776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.120 [2024-11-15 11:03:27.906785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:79448 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:19:41.120 [2024-11-15 11:03:27.906794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.120 [2024-11-15 11:03:27.906803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:79456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.120 [2024-11-15 11:03:27.906811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.120 [2024-11-15 11:03:27.906820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:79464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.120 [2024-11-15 11:03:27.906828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.120 [2024-11-15 11:03:27.906838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:79472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.120 [2024-11-15 11:03:27.906846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.120 [2024-11-15 11:03:27.906855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:78904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.120 [2024-11-15 11:03:27.906864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.120 [2024-11-15 11:03:27.906879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:78912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.120 [2024-11-15 11:03:27.906888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.120 [2024-11-15 11:03:27.906898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:78920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.120 [2024-11-15 11:03:27.906917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.120 [2024-11-15 11:03:27.906926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:78928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.120 [2024-11-15 11:03:27.906934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.120 [2024-11-15 11:03:27.906944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:78936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.120 [2024-11-15 11:03:27.906952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.120 [2024-11-15 11:03:27.906962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:78944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.120 [2024-11-15 11:03:27.906970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.120 [2024-11-15 11:03:27.906979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:78952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.120 [2024-11-15 
11:03:27.906987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.120 [2024-11-15 11:03:27.906997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:78960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.120 [2024-11-15 11:03:27.907005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.120 [2024-11-15 11:03:27.907015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:78968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.120 [2024-11-15 11:03:27.907023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.120 [2024-11-15 11:03:27.907039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:78976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.120 [2024-11-15 11:03:27.907048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.120 [2024-11-15 11:03:27.907057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:78984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.120 [2024-11-15 11:03:27.907065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.120 [2024-11-15 11:03:27.907075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:78992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.120 [2024-11-15 11:03:27.907082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.120 [2024-11-15 11:03:27.907092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:79000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.120 [2024-11-15 11:03:27.907100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.120 [2024-11-15 11:03:27.907109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:79008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.120 [2024-11-15 11:03:27.907117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.120 [2024-11-15 11:03:27.907127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:79016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.120 [2024-11-15 11:03:27.907135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.120 [2024-11-15 11:03:27.907146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:79024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:41.120 [2024-11-15 11:03:27.907154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.120 [2024-11-15 11:03:27.907163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:79480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.120 [2024-11-15 11:03:27.907171] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.120 [2024-11-15 11:03:27.907186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:79488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.120 [2024-11-15 11:03:27.907195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.120 [2024-11-15 11:03:27.907204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:79496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.120 [2024-11-15 11:03:27.907212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.120 [2024-11-15 11:03:27.907221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:79504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.120 [2024-11-15 11:03:27.907229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.120 [2024-11-15 11:03:27.907239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:79512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.120 [2024-11-15 11:03:27.907247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.120 [2024-11-15 11:03:27.907256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:79520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.120 [2024-11-15 11:03:27.907264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.120 [2024-11-15 11:03:27.907274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:79528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:41.120 [2024-11-15 11:03:27.907283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.120 [2024-11-15 11:03:27.907311] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:41.120 [2024-11-15 11:03:27.907320] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:41.120 [2024-11-15 11:03:27.907328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79536 len:8 PRP1 0x0 PRP2 0x0 00:19:41.121 [2024-11-15 11:03:27.907337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:41.121 [2024-11-15 11:03:27.907619] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:19:41.121 [2024-11-15 11:03:27.907705] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf42e50 (9): Bad file descriptor 00:19:41.121 [2024-11-15 11:03:27.907819] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:41.121 [2024-11-15 11:03:27.907867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf42e50 with addr=10.0.0.3, port=4420 00:19:41.121 [2024-11-15 11:03:27.907879] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf42e50 is same with the state(6) to be set 
00:19:41.121 [2024-11-15 11:03:27.907897] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf42e50 (9): Bad file descriptor 00:19:41.121 [2024-11-15 11:03:27.907913] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:19:41.121 [2024-11-15 11:03:27.907923] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:19:41.121 [2024-11-15 11:03:27.907932] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:19:41.121 [2024-11-15 11:03:27.907943] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:19:41.121 [2024-11-15 11:03:27.907954] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:19:41.121 11:03:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:19:42.058 4907.50 IOPS, 19.17 MiB/s [2024-11-15T11:03:28.919Z] [2024-11-15 11:03:28.908099] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:42.058 [2024-11-15 11:03:28.908177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf42e50 with addr=10.0.0.3, port=4420 00:19:42.058 [2024-11-15 11:03:28.908193] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf42e50 is same with the state(6) to be set 00:19:42.058 [2024-11-15 11:03:28.908225] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf42e50 (9): Bad file descriptor 00:19:42.058 [2024-11-15 11:03:28.908245] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:19:42.058 [2024-11-15 11:03:28.908261] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:19:42.058 [2024-11-15 11:03:28.908272] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:19:42.058 [2024-11-15 11:03:28.908282] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
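Reading the dump above: nvme_ctrlr_disconnect tears down the I/O submission queue as part of the reset, so every READ/WRITE still outstanding on qid 1 is completed by the host with ABORTED - SQ DELETION (00/08) and the remaining queued requests are completed manually. The reconnect that follows fails with errno 111 (ECONNREFUSED on Linux), apparently because the test is holding the 10.0.0.3:4420 listener down during this window. A throwaway way to tally the two events from a saved copy of this console output; the file name autotest.log is hypothetical:

  grep -c 'ABORTED - SQ DELETION' autotest.log          # completions cancelled by SQ deletion
  grep -c 'connect() failed, errno = 111' autotest.log  # refused TCP reconnect attempts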
00:19:42.058 [2024-11-15 11:03:28.908295] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:19:43.252 3271.67 IOPS, 12.78 MiB/s [2024-11-15T11:03:30.113Z] [2024-11-15 11:03:29.908457] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:43.252 [2024-11-15 11:03:29.908561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf42e50 with addr=10.0.0.3, port=4420 00:19:43.252 [2024-11-15 11:03:29.908577] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf42e50 is same with the state(6) to be set 00:19:43.252 [2024-11-15 11:03:29.908602] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf42e50 (9): Bad file descriptor 00:19:43.252 [2024-11-15 11:03:29.908622] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:19:43.252 [2024-11-15 11:03:29.908632] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:19:43.252 [2024-11-15 11:03:29.908643] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:19:43.252 [2024-11-15 11:03:29.908655] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:19:43.252 [2024-11-15 11:03:29.908666] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:19:44.190 2453.75 IOPS, 9.58 MiB/s [2024-11-15T11:03:31.051Z] [2024-11-15 11:03:30.911730] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:44.190 [2024-11-15 11:03:30.911810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf42e50 with addr=10.0.0.3, port=4420 00:19:44.190 [2024-11-15 11:03:30.911827] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf42e50 is same with the state(6) to be set 00:19:44.190 [2024-11-15 11:03:30.912072] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf42e50 (9): Bad file descriptor 00:19:44.190 [2024-11-15 11:03:30.912294] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:19:44.190 [2024-11-15 11:03:30.912307] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:19:44.190 [2024-11-15 11:03:30.912318] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:19:44.190 [2024-11-15 11:03:30.912329] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
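The failed resets above repeat roughly once per second while the sleep 3 at host/timeout.sh@101 keeps the listener absent; the nvmf_subsystem_add_listener call that follows restores the port, and the next reset attempt completes ("Resetting controller successful" below). A minimal sketch of the same outage-and-recovery toggle against a running target: the RPC shell variable is mine, the commands, NQN, address, and port are copied from this log, and the 3-second window mirrors the sleep above.

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # drop the TCP listener: host-side reconnects now fail with ECONNREFUSED (errno 111)
  $RPC nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  sleep 3
  # restore it: the host's bdev_nvme layer should succeed on its next scheduled reset
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420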
00:19:44.190 [2024-11-15 11:03:30.912340] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:19:44.190 11:03:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:44.449 [2024-11-15 11:03:31.187283] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:44.449 11:03:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@103 -- # wait 81994 00:19:45.274 1963.00 IOPS, 7.67 MiB/s [2024-11-15T11:03:32.135Z] [2024-11-15 11:03:31.940199] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 4] Resetting controller successful. 00:19:47.147 2977.83 IOPS, 11.63 MiB/s [2024-11-15T11:03:34.944Z] 3882.00 IOPS, 15.16 MiB/s [2024-11-15T11:03:35.885Z] 4558.12 IOPS, 17.81 MiB/s [2024-11-15T11:03:37.261Z] 5083.89 IOPS, 19.86 MiB/s [2024-11-15T11:03:37.261Z] 5507.00 IOPS, 21.51 MiB/s 00:19:50.400 Latency(us) 00:19:50.400 [2024-11-15T11:03:37.261Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:50.400 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:50.400 Verification LBA range: start 0x0 length 0x4000 00:19:50.400 NVMe0n1 : 10.01 5513.21 21.54 4433.39 0.00 12847.68 662.81 3019898.88 00:19:50.400 [2024-11-15T11:03:37.261Z] =================================================================================================================== 00:19:50.400 [2024-11-15T11:03:37.261Z] Total : 5513.21 21.54 4433.39 0.00 12847.68 0.00 3019898.88 00:19:50.400 { 00:19:50.400 "results": [ 00:19:50.400 { 00:19:50.400 "job": "NVMe0n1", 00:19:50.400 "core_mask": "0x4", 00:19:50.400 "workload": "verify", 00:19:50.400 "status": "finished", 00:19:50.400 "verify_range": { 00:19:50.400 "start": 0, 00:19:50.400 "length": 16384 00:19:50.400 }, 00:19:50.400 "queue_depth": 128, 00:19:50.400 "io_size": 4096, 00:19:50.400 "runtime": 10.009053, 00:19:50.400 "iops": 5513.208891990082, 00:19:50.400 "mibps": 21.535972234336256, 00:19:50.400 "io_failed": 44374, 00:19:50.400 "io_timeout": 0, 00:19:50.400 "avg_latency_us": 12847.675812096617, 00:19:50.400 "min_latency_us": 662.8072727272727, 00:19:50.400 "max_latency_us": 3019898.88 00:19:50.400 } 00:19:50.400 ], 00:19:50.400 "core_count": 1 00:19:50.400 } 00:19:50.400 11:03:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 81873 00:19:50.400 11:03:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 81873 ']' 00:19:50.400 11:03:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 81873 00:19:50.400 11:03:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:19:50.400 11:03:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:50.400 11:03:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81873 00:19:50.400 killing process with pid 81873 00:19:50.400 Received shutdown signal, test time was about 10.000000 seconds 00:19:50.400 00:19:50.400 Latency(us) 00:19:50.400 [2024-11-15T11:03:37.261Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:50.400 [2024-11-15T11:03:37.261Z] =================================================================================================================== 00:19:50.400 [2024-11-15T11:03:37.261Z] Total : 
0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:50.400 11:03:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:50.400 11:03:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:50.400 11:03:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81873' 00:19:50.400 11:03:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 81873 00:19:50.400 11:03:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 81873 00:19:50.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:50.400 11:03:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=82108 00:19:50.400 11:03:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:19:50.400 11:03:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 82108 /var/tmp/bdevperf.sock 00:19:50.400 11:03:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 82108 ']' 00:19:50.400 11:03:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:50.400 11:03:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:50.400 11:03:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:50.400 11:03:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:50.400 11:03:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:50.400 [2024-11-15 11:03:37.194125] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
00:19:50.400 [2024-11-15 11:03:37.194425] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82108 ] 00:19:50.657 [2024-11-15 11:03:37.341418] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:50.658 [2024-11-15 11:03:37.390778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:50.658 [2024-11-15 11:03:37.465165] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:51.590 11:03:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:51.590 11:03:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:19:51.590 11:03:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=82124 00:19:51.590 11:03:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:19:51.590 11:03:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 82108 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:19:51.590 11:03:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:19:52.158 NVMe0n1 00:19:52.158 11:03:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=82170 00:19:52.158 11:03:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:52.158 11:03:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:19:52.158 Running I/O for 10 seconds... 
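The setup above for the second bdevperf run reduces to the sequence sketched below. Paths, sockets, and arguments are copied verbatim from this log; the SPDK, RPC, and BDEVPERF_PID shell variables and the backgrounding are my shorthand for what the autotest_common.sh helpers (waitforlisten and friends) do around these commands.

  SPDK=/home/vagrant/spdk_repo/spdk
  RPC="$SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock"

  # bdevperf idles (-z) on core mask 0x4 and serves RPCs on its own socket
  $SPDK/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f &
  BDEVPERF_PID=$!

  # attach the nvmf_timeout.bt bpftrace probes to that process
  $SPDK/scripts/bpftrace.sh $BDEVPERF_PID $SPDK/scripts/bpf/nvmf_timeout.bt &

  # bdev_nvme options exactly as timeout.sh@118 sets them
  $RPC bdev_nvme_set_options -r -1 -e 9

  # connect NVMe0 over TCP with a 5 s ctrlr-loss timeout and a 2 s reconnect delay
  $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2

  # start the 10-second randread job defined on the bdevperf command line
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &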
00:19:53.095 11:03:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:53.357 17145.00 IOPS, 66.97 MiB/s [2024-11-15T11:03:40.218Z] [2024-11-15 11:03:40.036216] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.357 [2024-11-15 11:03:40.036479] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.357 [2024-11-15 11:03:40.036629] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.357 [2024-11-15 11:03:40.036643] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.357 [2024-11-15 11:03:40.036651] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.357 [2024-11-15 11:03:40.036659] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.357 [2024-11-15 11:03:40.036675] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.357 [2024-11-15 11:03:40.036684] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.357 [2024-11-15 11:03:40.036691] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.357 [2024-11-15 11:03:40.036700] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.357 [2024-11-15 11:03:40.036708] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.357 [2024-11-15 11:03:40.036737] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.358 [2024-11-15 11:03:40.036745] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.358 [2024-11-15 11:03:40.036754] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.358 [2024-11-15 11:03:40.036762] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.358 [2024-11-15 11:03:40.036776] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.358 [2024-11-15 11:03:40.036784] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.358 [2024-11-15 11:03:40.036792] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.358 [2024-11-15 11:03:40.036799] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.358 [2024-11-15 11:03:40.036806] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.358 [2024-11-15 
11:03:40.036813] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.358 [2024-11-15 11:03:40.036821] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.358 [2024-11-15 11:03:40.036829] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.358 [2024-11-15 11:03:40.036837] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.358 [2024-11-15 11:03:40.036844] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.358 [2024-11-15 11:03:40.036852] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.358 [2024-11-15 11:03:40.036859] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.358 [2024-11-15 11:03:40.036866] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.358 [2024-11-15 11:03:40.036874] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.358 [2024-11-15 11:03:40.036896] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.358 [2024-11-15 11:03:40.036912] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.358 [2024-11-15 11:03:40.036920] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.358 [2024-11-15 11:03:40.036928] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.358 [2024-11-15 11:03:40.036936] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.358 [2024-11-15 11:03:40.036946] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.358 [2024-11-15 11:03:40.036955] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.358 [2024-11-15 11:03:40.036963] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.358 [2024-11-15 11:03:40.036970] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.358 [2024-11-15 11:03:40.036978] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.358 [2024-11-15 11:03:40.036986] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.358 [2024-11-15 11:03:40.036993] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.358 [2024-11-15 11:03:40.037000] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to 
be set 00:19:53.358 [2024-11-15 11:03:40.037007] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.358 [2024-11-15 11:03:40.037014] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.358 [2024-11-15 11:03:40.037020] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.358 [2024-11-15 11:03:40.037038] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.358 [2024-11-15 11:03:40.037045] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.358 [2024-11-15 11:03:40.037066] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.358 [2024-11-15 11:03:40.037073] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.358 [2024-11-15 11:03:40.037080] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.358 [2024-11-15 11:03:40.037087] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.358 [2024-11-15 11:03:40.037094] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.358 [2024-11-15 11:03:40.037100] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.358 [2024-11-15 11:03:40.037107] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.358 [2024-11-15 11:03:40.037114] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.358 [2024-11-15 11:03:40.037121] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.358 [2024-11-15 11:03:40.037128] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.358 [2024-11-15 11:03:40.037148] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.358 [2024-11-15 11:03:40.037156] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.358 [2024-11-15 11:03:40.037164] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.358 [2024-11-15 11:03:40.037171] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.358 [2024-11-15 11:03:40.037178] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.358 [2024-11-15 11:03:40.037186] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.358 [2024-11-15 11:03:40.037195] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.358 [2024-11-15 11:03:40.037202] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.358 [2024-11-15 11:03:40.037210] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.358 [2024-11-15 11:03:40.037218] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.358 [2024-11-15 11:03:40.037225] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.358 [2024-11-15 11:03:40.037233] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.358 [2024-11-15 11:03:40.037240] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.358 [2024-11-15 11:03:40.037248] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.358 [2024-11-15 11:03:40.037254] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.358 [2024-11-15 11:03:40.037261] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.358 [2024-11-15 11:03:40.037268] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.358 [2024-11-15 11:03:40.037275] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.358 [2024-11-15 11:03:40.037281] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.358 [2024-11-15 11:03:40.037288] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.358 [2024-11-15 11:03:40.037295] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.358 [2024-11-15 11:03:40.037302] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.358 [2024-11-15 11:03:40.037308] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.358 [2024-11-15 11:03:40.037315] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.358 [2024-11-15 11:03:40.037324] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.358 [2024-11-15 11:03:40.037331] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.358 [2024-11-15 11:03:40.037339] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.358 [2024-11-15 11:03:40.037347] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.358 [2024-11-15 11:03:40.037355] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.358 [2024-11-15 11:03:40.037362] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.358 [2024-11-15 11:03:40.037369] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.358 [2024-11-15 11:03:40.037376] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.358 [2024-11-15 11:03:40.037383] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.358 [2024-11-15 11:03:40.037391] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.358 [2024-11-15 11:03:40.037398] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.358 [2024-11-15 11:03:40.037405] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.358 [2024-11-15 11:03:40.037413] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.358 [2024-11-15 11:03:40.037420] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.359 [2024-11-15 11:03:40.037427] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.359 [2024-11-15 11:03:40.037435] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.359 [2024-11-15 11:03:40.037442] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.359 [2024-11-15 11:03:40.037449] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.359 [2024-11-15 11:03:40.037462] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.359 [2024-11-15 11:03:40.037469] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.359 [2024-11-15 11:03:40.037476] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.359 [2024-11-15 11:03:40.037483] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.359 [2024-11-15 11:03:40.037489] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.359 [2024-11-15 11:03:40.037496] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.359 [2024-11-15 11:03:40.037503] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.359 [2024-11-15 11:03:40.037510] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 
00:19:53.359 [2024-11-15 11:03:40.037517] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.359 [2024-11-15 11:03:40.037535] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.359 [2024-11-15 11:03:40.037544] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.359 [2024-11-15 11:03:40.037551] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.359 [2024-11-15 11:03:40.037560] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.359 [2024-11-15 11:03:40.037567] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.359 [2024-11-15 11:03:40.037575] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.359 [2024-11-15 11:03:40.037583] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.359 [2024-11-15 11:03:40.037590] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.359 [2024-11-15 11:03:40.037597] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.359 [2024-11-15 11:03:40.037604] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.359 [2024-11-15 11:03:40.037611] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.359 [2024-11-15 11:03:40.037618] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.359 [2024-11-15 11:03:40.037625] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.359 [2024-11-15 11:03:40.037633] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.359 [2024-11-15 11:03:40.037639] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.359 [2024-11-15 11:03:40.037646] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xac3aa0 is same with the state(6) to be set 00:19:53.359 [2024-11-15 11:03:40.037702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:48008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.359 [2024-11-15 11:03:40.037732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.359 [2024-11-15 11:03:40.037754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.359 [2024-11-15 11:03:40.037763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.359 [2024-11-15 11:03:40.037774] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:117912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.359 [2024-11-15 11:03:40.037783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.359 [2024-11-15 11:03:40.037792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:53720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.359 [2024-11-15 11:03:40.037800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.359 [2024-11-15 11:03:40.037810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:31576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.359 [2024-11-15 11:03:40.037818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.359 [2024-11-15 11:03:40.037828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:107560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.359 [2024-11-15 11:03:40.037836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.359 [2024-11-15 11:03:40.037846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:124912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.359 [2024-11-15 11:03:40.037863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.359 [2024-11-15 11:03:40.037873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:78528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.359 [2024-11-15 11:03:40.037881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.359 [2024-11-15 11:03:40.037891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:78280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.359 [2024-11-15 11:03:40.037899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.359 [2024-11-15 11:03:40.037909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:81424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.359 [2024-11-15 11:03:40.037917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.359 [2024-11-15 11:03:40.037927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:86504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.359 [2024-11-15 11:03:40.037935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.359 [2024-11-15 11:03:40.037944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:127600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.359 [2024-11-15 11:03:40.037952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.359 [2024-11-15 11:03:40.037962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:14 nsid:1 lba:52648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.359 [2024-11-15 11:03:40.037970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.359 [2024-11-15 11:03:40.037979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:100040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.359 [2024-11-15 11:03:40.037987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.359 [2024-11-15 11:03:40.037996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:123160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.359 [2024-11-15 11:03:40.038004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.359 [2024-11-15 11:03:40.038014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:45792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.359 [2024-11-15 11:03:40.038022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.359 [2024-11-15 11:03:40.038031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:49496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.359 [2024-11-15 11:03:40.038041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.359 [2024-11-15 11:03:40.038052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:32264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.359 [2024-11-15 11:03:40.038060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.359 [2024-11-15 11:03:40.038070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:38776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.359 [2024-11-15 11:03:40.038079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.359 [2024-11-15 11:03:40.038090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:1416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.359 [2024-11-15 11:03:40.038098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.359 [2024-11-15 11:03:40.038108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:70304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.359 [2024-11-15 11:03:40.038117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.359 [2024-11-15 11:03:40.038127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:121608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.359 [2024-11-15 11:03:40.038135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.359 [2024-11-15 11:03:40.038145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:94904 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.359 [2024-11-15 11:03:40.038153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.359 [2024-11-15 11:03:40.038163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:88912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.359 [2024-11-15 11:03:40.038171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.359 [2024-11-15 11:03:40.038181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.359 [2024-11-15 11:03:40.038189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.359 [2024-11-15 11:03:40.038198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:113784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.359 [2024-11-15 11:03:40.038206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.360 [2024-11-15 11:03:40.038216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:130016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.360 [2024-11-15 11:03:40.038224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.360 [2024-11-15 11:03:40.038233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:103600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.360 [2024-11-15 11:03:40.038241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.360 [2024-11-15 11:03:40.038251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:75480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.360 [2024-11-15 11:03:40.038259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.360 [2024-11-15 11:03:40.038268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:30864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.360 [2024-11-15 11:03:40.038276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.360 [2024-11-15 11:03:40.038285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:104320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.360 [2024-11-15 11:03:40.038293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.360 [2024-11-15 11:03:40.038303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:111008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.360 [2024-11-15 11:03:40.038312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.360 [2024-11-15 11:03:40.038322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:26344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:53.360 [2024-11-15 11:03:40.038331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.360 [2024-11-15 11:03:40.038341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:3328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.360 [2024-11-15 11:03:40.038348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.360 [2024-11-15 11:03:40.038368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:120256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.360 [2024-11-15 11:03:40.038377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.360 [2024-11-15 11:03:40.038386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:57752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.360 [2024-11-15 11:03:40.038395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.360 [2024-11-15 11:03:40.038405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:16104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.360 [2024-11-15 11:03:40.038413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.360 [2024-11-15 11:03:40.038423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:3832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.360 [2024-11-15 11:03:40.038431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.360 [2024-11-15 11:03:40.038440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:33304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.360 [2024-11-15 11:03:40.038448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.360 [2024-11-15 11:03:40.038458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.360 [2024-11-15 11:03:40.038465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.360 [2024-11-15 11:03:40.038475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:99872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.360 [2024-11-15 11:03:40.038483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.360 [2024-11-15 11:03:40.038492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:80688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.360 [2024-11-15 11:03:40.038501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.360 [2024-11-15 11:03:40.038510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:54192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.360 [2024-11-15 11:03:40.038518] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.360 [2024-11-15 11:03:40.038538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:50648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.360 [2024-11-15 11:03:40.038548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.360 [2024-11-15 11:03:40.038564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:34680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.360 [2024-11-15 11:03:40.038572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.360 [2024-11-15 11:03:40.038581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:91624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.360 [2024-11-15 11:03:40.038589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.360 [2024-11-15 11:03:40.038598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:40216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.360 [2024-11-15 11:03:40.038606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.360 [2024-11-15 11:03:40.038615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:95888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.360 [2024-11-15 11:03:40.038623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.360 [2024-11-15 11:03:40.038639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:4408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.360 [2024-11-15 11:03:40.038647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.360 [2024-11-15 11:03:40.038656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:27320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.360 [2024-11-15 11:03:40.038664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.360 [2024-11-15 11:03:40.038711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:89184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.360 [2024-11-15 11:03:40.038719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.360 [2024-11-15 11:03:40.038729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:86816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.360 [2024-11-15 11:03:40.038737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.360 [2024-11-15 11:03:40.038747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:25904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.360 [2024-11-15 11:03:40.038763] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.360 [2024-11-15 11:03:40.038773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:40176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.360 [2024-11-15 11:03:40.038781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.360 [2024-11-15 11:03:40.038791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:43800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.360 [2024-11-15 11:03:40.038799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.360 [2024-11-15 11:03:40.038808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:108656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.360 [2024-11-15 11:03:40.038816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.360 [2024-11-15 11:03:40.038826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:26808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.360 [2024-11-15 11:03:40.038834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.360 [2024-11-15 11:03:40.038844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:128824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.360 [2024-11-15 11:03:40.038852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.360 [2024-11-15 11:03:40.038861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:11872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.360 [2024-11-15 11:03:40.038870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.360 [2024-11-15 11:03:40.038882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:103160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.360 [2024-11-15 11:03:40.038890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.360 [2024-11-15 11:03:40.038900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:29856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.360 [2024-11-15 11:03:40.038908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.360 [2024-11-15 11:03:40.038918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:19392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.360 [2024-11-15 11:03:40.038926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.360 [2024-11-15 11:03:40.038935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:75456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.360 [2024-11-15 11:03:40.038953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.360 [2024-11-15 11:03:40.038962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:110264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.360 [2024-11-15 11:03:40.038971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.360 [2024-11-15 11:03:40.038986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:83264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.360 [2024-11-15 11:03:40.038994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.360 [2024-11-15 11:03:40.039004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:5192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.360 [2024-11-15 11:03:40.039012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.361 [2024-11-15 11:03:40.039035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:82008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.361 [2024-11-15 11:03:40.039043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.361 [2024-11-15 11:03:40.039053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:100112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.361 [2024-11-15 11:03:40.039061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.361 [2024-11-15 11:03:40.039071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:100976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.361 [2024-11-15 11:03:40.039079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.361 [2024-11-15 11:03:40.039089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:74240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.361 [2024-11-15 11:03:40.039097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.361 [2024-11-15 11:03:40.039107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:17072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.361 [2024-11-15 11:03:40.039115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.361 [2024-11-15 11:03:40.039124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:124680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.361 [2024-11-15 11:03:40.039132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.361 [2024-11-15 11:03:40.039142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:79744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.361 [2024-11-15 11:03:40.039150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:19:53.361 [2024-11-15 11:03:40.039160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:109968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.361 [2024-11-15 11:03:40.039168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.361 [2024-11-15 11:03:40.039177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:106736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.361 [2024-11-15 11:03:40.039186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.361 [2024-11-15 11:03:40.039195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:116992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.361 [2024-11-15 11:03:40.039204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.361 [2024-11-15 11:03:40.039213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:86848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.361 [2024-11-15 11:03:40.039221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.361 [2024-11-15 11:03:40.039231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:100152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.361 [2024-11-15 11:03:40.039239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.361 [2024-11-15 11:03:40.039249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:59784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.361 [2024-11-15 11:03:40.039257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.361 [2024-11-15 11:03:40.039267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:42752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.361 [2024-11-15 11:03:40.039275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.361 [2024-11-15 11:03:40.039290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:43016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.361 [2024-11-15 11:03:40.039299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.361 [2024-11-15 11:03:40.039309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:59048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.361 [2024-11-15 11:03:40.039317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.361 [2024-11-15 11:03:40.039333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:104112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.361 [2024-11-15 11:03:40.039341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.361 [2024-11-15 
11:03:40.039351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:84592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.361 [2024-11-15 11:03:40.039360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.361 [2024-11-15 11:03:40.039370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:78768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.361 [2024-11-15 11:03:40.039379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.361 [2024-11-15 11:03:40.039388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:121616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.361 [2024-11-15 11:03:40.039397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.361 [2024-11-15 11:03:40.039406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:89576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.361 [2024-11-15 11:03:40.039414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.361 [2024-11-15 11:03:40.039424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:31808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.361 [2024-11-15 11:03:40.039433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.361 [2024-11-15 11:03:40.039443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:128904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.361 [2024-11-15 11:03:40.039452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.361 [2024-11-15 11:03:40.039461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:23344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.361 [2024-11-15 11:03:40.039470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.361 [2024-11-15 11:03:40.039480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:106584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.361 [2024-11-15 11:03:40.039488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.361 [2024-11-15 11:03:40.039498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:111864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.361 [2024-11-15 11:03:40.039506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.361 [2024-11-15 11:03:40.039516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:75872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.361 [2024-11-15 11:03:40.039524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.361 [2024-11-15 11:03:40.039533] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:38192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.361 [2024-11-15 11:03:40.039542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.361 [2024-11-15 11:03:40.039551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:129656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.361 [2024-11-15 11:03:40.039571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.361 [2024-11-15 11:03:40.039582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:11384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.361 [2024-11-15 11:03:40.039590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.361 [2024-11-15 11:03:40.039606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:114328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.361 [2024-11-15 11:03:40.039615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.361 [2024-11-15 11:03:40.039625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:48848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.361 [2024-11-15 11:03:40.039634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.361 [2024-11-15 11:03:40.039649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:13424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.361 [2024-11-15 11:03:40.039657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.361 [2024-11-15 11:03:40.039667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:35384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.361 [2024-11-15 11:03:40.039675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.361 [2024-11-15 11:03:40.039685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:124400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.361 [2024-11-15 11:03:40.039694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.361 [2024-11-15 11:03:40.039704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:34104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.361 [2024-11-15 11:03:40.039712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.361 [2024-11-15 11:03:40.039722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:111048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.361 [2024-11-15 11:03:40.039730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.361 [2024-11-15 11:03:40.039740] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:105 nsid:1 lba:111928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.361 [2024-11-15 11:03:40.039748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.361 [2024-11-15 11:03:40.039758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:28104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.361 [2024-11-15 11:03:40.039766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.361 [2024-11-15 11:03:40.039776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:15648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.361 [2024-11-15 11:03:40.039784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.362 [2024-11-15 11:03:40.039794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.362 [2024-11-15 11:03:40.039803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.362 [2024-11-15 11:03:40.039813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:99296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.362 [2024-11-15 11:03:40.039821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.362 [2024-11-15 11:03:40.039839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:129280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.362 [2024-11-15 11:03:40.039849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.362 [2024-11-15 11:03:40.039862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:4800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.362 [2024-11-15 11:03:40.039870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.362 [2024-11-15 11:03:40.039880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:44672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.362 [2024-11-15 11:03:40.039888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.362 [2024-11-15 11:03:40.039898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:11296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.362 [2024-11-15 11:03:40.039907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.362 [2024-11-15 11:03:40.039923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:44504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.362 [2024-11-15 11:03:40.039936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.362 [2024-11-15 11:03:40.039946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 
nsid:1 lba:104904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.362 [2024-11-15 11:03:40.039954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.362 [2024-11-15 11:03:40.039970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:85632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.362 [2024-11-15 11:03:40.039978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.362 [2024-11-15 11:03:40.039988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:78712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.362 [2024-11-15 11:03:40.039997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.362 [2024-11-15 11:03:40.040006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:83504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.362 [2024-11-15 11:03:40.040015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.362 [2024-11-15 11:03:40.040024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:97640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.362 [2024-11-15 11:03:40.040032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.362 [2024-11-15 11:03:40.040042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:86840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.362 [2024-11-15 11:03:40.040050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.362 [2024-11-15 11:03:40.040059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:114392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.362 [2024-11-15 11:03:40.040068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.362 [2024-11-15 11:03:40.040078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:48976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.362 [2024-11-15 11:03:40.040086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.362 [2024-11-15 11:03:40.040095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:77920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.362 [2024-11-15 11:03:40.040104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.362 [2024-11-15 11:03:40.040114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:39368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.362 [2024-11-15 11:03:40.040122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.362 [2024-11-15 11:03:40.040132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:33456 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.362 [2024-11-15 11:03:40.040140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.362 [2024-11-15 11:03:40.040149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:92704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.362 [2024-11-15 11:03:40.040157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.362 [2024-11-15 11:03:40.040173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:81112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.362 [2024-11-15 11:03:40.040181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.362 [2024-11-15 11:03:40.040191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:31392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.362 [2024-11-15 11:03:40.040214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.362 [2024-11-15 11:03:40.040223] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda3140 is same with the state(6) to be set 00:19:53.362 [2024-11-15 11:03:40.040234] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:53.362 [2024-11-15 11:03:40.040248] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:53.362 [2024-11-15 11:03:40.040255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:62200 len:8 PRP1 0x0 PRP2 0x0 00:19:53.362 [2024-11-15 11:03:40.040263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.362 [2024-11-15 11:03:40.040652] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:19:53.362 [2024-11-15 11:03:40.040916] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd35e50 (9): Bad file descriptor 00:19:53.362 [2024-11-15 11:03:40.041053] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:53.362 [2024-11-15 11:03:40.041078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd35e50 with addr=10.0.0.3, port=4420 00:19:53.362 [2024-11-15 11:03:40.041090] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd35e50 is same with the state(6) to be set 00:19:53.362 [2024-11-15 11:03:40.041109] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd35e50 (9): Bad file descriptor 00:19:53.362 [2024-11-15 11:03:40.041125] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:19:53.362 [2024-11-15 11:03:40.041135] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:19:53.362 [2024-11-15 11:03:40.041146] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:19:53.362 [2024-11-15 11:03:40.041156] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 
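The run of NOTICE pairs above is the host printing, for every outstanding READ on qid:1, the command followed by its ABORTED - SQ DELETION completion once the target drops the queue; the first failed reconnect comes right after it (connect() to 10.0.0.3:4420 refused with errno = 111, then "Resetting controller failed."). As a rough sanity check against a captured copy of this output, the flushed commands can simply be counted. This is an illustrative shell snippet, not part of timeout.sh, and timeout.log is a hypothetical capture file:

  # Count how many queued commands were completed as ABORTED - SQ DELETION in a saved log.
  grep -c 'ABORTED - SQ DELETION' timeout.log
  # With bdevperf running at queue depth 128 (see the results summary further down), the count per
  # connection loss should land close to the "io_failed": 128 reported in the JSON summary.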
00:19:53.362 [2024-11-15 11:03:40.041166] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:19:53.362 11:03:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@128 -- # wait 82170 00:19:55.284 9653.00 IOPS, 37.71 MiB/s [2024-11-15T11:03:42.145Z] 6435.33 IOPS, 25.14 MiB/s [2024-11-15T11:03:42.145Z] [2024-11-15 11:03:42.041387] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:55.284 [2024-11-15 11:03:42.041473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd35e50 with addr=10.0.0.3, port=4420 00:19:55.284 [2024-11-15 11:03:42.041501] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd35e50 is same with the state(6) to be set 00:19:55.284 [2024-11-15 11:03:42.041541] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd35e50 (9): Bad file descriptor 00:19:55.284 [2024-11-15 11:03:42.041569] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:19:55.284 [2024-11-15 11:03:42.041579] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:19:55.284 [2024-11-15 11:03:42.041592] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:19:55.284 [2024-11-15 11:03:42.041603] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:19:55.284 [2024-11-15 11:03:42.041617] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:19:57.185 4826.50 IOPS, 18.85 MiB/s [2024-11-15T11:03:44.046Z] 3861.20 IOPS, 15.08 MiB/s [2024-11-15T11:03:44.046Z] [2024-11-15 11:03:44.041828] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:57.185 [2024-11-15 11:03:44.042263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd35e50 with addr=10.0.0.3, port=4420 00:19:57.185 [2024-11-15 11:03:44.042290] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd35e50 is same with the state(6) to be set 00:19:57.185 [2024-11-15 11:03:44.042331] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd35e50 (9): Bad file descriptor 00:19:57.185 [2024-11-15 11:03:44.042354] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:19:57.185 [2024-11-15 11:03:44.042364] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:19:57.186 [2024-11-15 11:03:44.042386] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:19:57.186 [2024-11-15 11:03:44.042399] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:19:57.186 [2024-11-15 11:03:44.042411] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:19:59.059 3217.67 IOPS, 12.57 MiB/s [2024-11-15T11:03:46.179Z] 2758.00 IOPS, 10.77 MiB/s [2024-11-15T11:03:46.180Z] [2024-11-15 11:03:46.042499] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 
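The interleaved IOPS samples above (9653.00, 6435.33, 4826.50, 3861.20, 3217.67, 2758.00) are bdevperf running averages: no new I/O completes while the reconnect attempts to 10.0.0.3:4420 keep failing, so each sample is just the completions accumulated before the outage divided by the elapsed seconds. The final summary below (2373.05 IOPS over an 8.14 s runtime, i.e. roughly 19306 total completions) is consistent with that, and the decay can be reproduced with a line of shell arithmetic. This is an illustrative aside, not part of the test; the 19306 figure is inferred from the summary:

  # Running-average IOPS at whole-second marks, assuming ~19306 completions before the outage.
  for t in 2 3 4 5 6 7 8; do echo "elapsed=${t}s avg=$((19306 / t)) IOPS"; done
  # prints 9653, 6435, 4826, 3861, 3217, 2758, 2413 -- matching the samples in the log
  # (to integer truncation; the log's 2413.25 sample appears just below).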
00:19:59.319 [2024-11-15 11:03:46.042560] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:19:59.319 [2024-11-15 11:03:46.042585] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:19:59.319 [2024-11-15 11:03:46.042595] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] already in failed state 00:19:59.319 [2024-11-15 11:03:46.042606] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:20:00.257 2413.25 IOPS, 9.43 MiB/s 00:20:00.257 Latency(us) 00:20:00.257 [2024-11-15T11:03:47.118Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:00.257 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:20:00.257 NVMe0n1 : 8.14 2373.05 9.27 15.73 0.00 53497.70 7119.59 7015926.69 00:20:00.257 [2024-11-15T11:03:47.118Z] =================================================================================================================== 00:20:00.257 [2024-11-15T11:03:47.118Z] Total : 2373.05 9.27 15.73 0.00 53497.70 7119.59 7015926.69 00:20:00.257 { 00:20:00.257 "results": [ 00:20:00.257 { 00:20:00.257 "job": "NVMe0n1", 00:20:00.257 "core_mask": "0x4", 00:20:00.257 "workload": "randread", 00:20:00.257 "status": "finished", 00:20:00.257 "queue_depth": 128, 00:20:00.257 "io_size": 4096, 00:20:00.257 "runtime": 8.13552, 00:20:00.257 "iops": 2373.0505241213837, 00:20:00.257 "mibps": 9.269728609849155, 00:20:00.257 "io_failed": 128, 00:20:00.257 "io_timeout": 0, 00:20:00.257 "avg_latency_us": 53497.700915546324, 00:20:00.257 "min_latency_us": 7119.592727272728, 00:20:00.257 "max_latency_us": 7015926.69090909 00:20:00.257 } 00:20:00.257 ], 00:20:00.257 "core_count": 1 00:20:00.257 } 00:20:00.257 11:03:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:00.257 Attaching 5 probes... 
00:20:00.257 1342.928112: reset bdev controller NVMe0 00:20:00.257 1343.253766: reconnect bdev controller NVMe0 00:20:00.257 3343.514486: reconnect delay bdev controller NVMe0 00:20:00.257 3343.540511: reconnect bdev controller NVMe0 00:20:00.257 5343.954889: reconnect delay bdev controller NVMe0 00:20:00.257 5343.980429: reconnect bdev controller NVMe0 00:20:00.257 7344.759100: reconnect delay bdev controller NVMe0 00:20:00.257 7344.777939: reconnect bdev controller NVMe0 00:20:00.257 11:03:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:20:00.257 11:03:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:20:00.257 11:03:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@136 -- # kill 82124 00:20:00.258 11:03:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:00.258 11:03:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 82108 00:20:00.258 11:03:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 82108 ']' 00:20:00.258 11:03:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 82108 00:20:00.258 11:03:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:20:00.258 11:03:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:00.258 11:03:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82108 00:20:00.258 killing process with pid 82108 00:20:00.258 Received shutdown signal, test time was about 8.210113 seconds 00:20:00.258 00:20:00.258 Latency(us) 00:20:00.258 [2024-11-15T11:03:47.119Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:00.258 [2024-11-15T11:03:47.119Z] =================================================================================================================== 00:20:00.258 [2024-11-15T11:03:47.119Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:00.258 11:03:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:00.258 11:03:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:00.258 11:03:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82108' 00:20:00.258 11:03:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 82108 00:20:00.258 11:03:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 82108 00:20:00.516 11:03:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:01.084 11:03:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:20:01.084 11:03:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:20:01.084 11:03:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:01.084 11:03:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@121 -- # sync 00:20:01.084 11:03:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:01.084 11:03:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@124 -- # set +e 00:20:01.084 11:03:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:01.084 11:03:47 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:01.084 rmmod nvme_tcp 00:20:01.084 rmmod nvme_fabrics 00:20:01.084 rmmod nvme_keyring 00:20:01.084 11:03:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:01.084 11:03:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@128 -- # set -e 00:20:01.084 11:03:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@129 -- # return 0 00:20:01.084 11:03:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@517 -- # '[' -n 81697 ']' 00:20:01.084 11:03:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@518 -- # killprocess 81697 00:20:01.084 11:03:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 81697 ']' 00:20:01.084 11:03:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 81697 00:20:01.084 11:03:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:20:01.084 11:03:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:01.084 11:03:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81697 00:20:01.084 killing process with pid 81697 00:20:01.084 11:03:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:01.084 11:03:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:01.084 11:03:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81697' 00:20:01.084 11:03:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 81697 00:20:01.084 11:03:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 81697 00:20:01.343 11:03:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:01.343 11:03:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:01.343 11:03:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:01.343 11:03:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@297 -- # iptr 00:20:01.343 11:03:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:01.343 11:03:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-save 00:20:01.343 11:03:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-restore 00:20:01.343 11:03:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:01.343 11:03:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:01.343 11:03:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:01.343 11:03:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:01.343 11:03:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:01.343 11:03:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:01.343 11:03:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:01.343 11:03:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:01.343 11:03:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:01.343 11:03:48 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:01.343 11:03:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:01.343 11:03:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:01.343 11:03:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:01.603 11:03:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:01.603 11:03:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:01.603 11:03:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:01.603 11:03:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:01.603 11:03:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:01.603 11:03:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:01.603 11:03:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@300 -- # return 0 00:20:01.603 00:20:01.603 real 0m45.986s 00:20:01.603 user 2m14.477s 00:20:01.603 sys 0m5.540s 00:20:01.603 11:03:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:01.603 11:03:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:01.603 ************************************ 00:20:01.603 END TEST nvmf_timeout 00:20:01.603 ************************************ 00:20:01.603 11:03:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ virt == phy ]] 00:20:01.603 11:03:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:20:01.603 ************************************ 00:20:01.603 END TEST nvmf_host 00:20:01.603 ************************************ 00:20:01.603 00:20:01.603 real 5m0.479s 00:20:01.603 user 13m3.042s 00:20:01.603 sys 1m10.123s 00:20:01.603 11:03:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:01.603 11:03:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.603 11:03:48 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:20:01.603 11:03:48 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 1 -eq 0 ]] 00:20:01.603 ************************************ 00:20:01.603 END TEST nvmf_tcp 00:20:01.603 ************************************ 00:20:01.603 00:20:01.603 real 12m31.526s 00:20:01.603 user 30m5.215s 00:20:01.603 sys 3m13.165s 00:20:01.603 11:03:48 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:01.603 11:03:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:01.603 11:03:48 -- spdk/autotest.sh@285 -- # [[ 1 -eq 0 ]] 00:20:01.603 11:03:48 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:20:01.603 11:03:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:01.603 11:03:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:01.603 11:03:48 -- common/autotest_common.sh@10 -- # set +x 00:20:01.603 ************************************ 00:20:01.603 START TEST nvmf_dif 00:20:01.603 ************************************ 00:20:01.603 11:03:48 nvmf_dif -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:20:01.864 * Looking for test storage... 
00:20:01.864 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:01.864 11:03:48 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:01.864 11:03:48 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:01.864 11:03:48 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:20:01.864 11:03:48 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:01.864 11:03:48 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:01.864 11:03:48 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:01.864 11:03:48 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:01.864 11:03:48 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:20:01.864 11:03:48 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:20:01.864 11:03:48 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:20:01.864 11:03:48 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:20:01.864 11:03:48 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:20:01.864 11:03:48 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:20:01.864 11:03:48 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:20:01.864 11:03:48 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:01.864 11:03:48 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:20:01.864 11:03:48 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:20:01.864 11:03:48 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:01.864 11:03:48 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:01.864 11:03:48 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:20:01.864 11:03:48 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:20:01.864 11:03:48 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:01.864 11:03:48 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:20:01.864 11:03:48 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:20:01.864 11:03:48 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:20:01.864 11:03:48 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:20:01.864 11:03:48 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:01.864 11:03:48 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:20:01.864 11:03:48 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:20:01.864 11:03:48 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:01.864 11:03:48 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:01.864 11:03:48 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:20:01.864 11:03:48 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:01.864 11:03:48 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:01.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:01.864 --rc genhtml_branch_coverage=1 00:20:01.864 --rc genhtml_function_coverage=1 00:20:01.864 --rc genhtml_legend=1 00:20:01.864 --rc geninfo_all_blocks=1 00:20:01.864 --rc geninfo_unexecuted_blocks=1 00:20:01.864 00:20:01.864 ' 00:20:01.864 11:03:48 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:01.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:01.864 --rc genhtml_branch_coverage=1 00:20:01.864 --rc genhtml_function_coverage=1 00:20:01.864 --rc genhtml_legend=1 00:20:01.864 --rc geninfo_all_blocks=1 00:20:01.864 --rc geninfo_unexecuted_blocks=1 00:20:01.864 00:20:01.864 ' 00:20:01.864 11:03:48 nvmf_dif -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:20:01.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:01.864 --rc genhtml_branch_coverage=1 00:20:01.864 --rc genhtml_function_coverage=1 00:20:01.864 --rc genhtml_legend=1 00:20:01.864 --rc geninfo_all_blocks=1 00:20:01.864 --rc geninfo_unexecuted_blocks=1 00:20:01.864 00:20:01.864 ' 00:20:01.864 11:03:48 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:01.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:01.864 --rc genhtml_branch_coverage=1 00:20:01.864 --rc genhtml_function_coverage=1 00:20:01.864 --rc genhtml_legend=1 00:20:01.864 --rc geninfo_all_blocks=1 00:20:01.864 --rc geninfo_unexecuted_blocks=1 00:20:01.864 00:20:01.864 ' 00:20:01.864 11:03:48 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:01.864 11:03:48 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:20:01.864 11:03:48 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:01.864 11:03:48 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:01.864 11:03:48 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:01.864 11:03:48 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:01.864 11:03:48 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:01.864 11:03:48 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:01.864 11:03:48 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:01.864 11:03:48 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:01.864 11:03:48 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:01.864 11:03:48 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:01.864 11:03:48 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:20:01.864 11:03:48 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:20:01.864 11:03:48 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:01.864 11:03:48 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:01.864 11:03:48 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:01.864 11:03:48 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:01.864 11:03:48 nvmf_dif -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:01.864 11:03:48 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:20:01.864 11:03:48 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:01.864 11:03:48 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:01.864 11:03:48 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:01.864 11:03:48 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.865 11:03:48 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.865 11:03:48 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.865 11:03:48 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:20:01.865 11:03:48 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.865 11:03:48 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:20:01.865 11:03:48 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:01.865 11:03:48 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:01.865 11:03:48 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:01.865 11:03:48 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:01.865 11:03:48 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:01.865 11:03:48 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:01.865 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:01.865 11:03:48 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:01.865 11:03:48 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:01.865 11:03:48 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:01.865 11:03:48 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:20:01.865 11:03:48 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:20:01.865 11:03:48 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:20:01.865 11:03:48 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:20:01.865 11:03:48 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:20:01.865 11:03:48 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:01.865 11:03:48 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:01.865 11:03:48 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:01.865 11:03:48 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:01.865 11:03:48 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:01.865 11:03:48 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:01.865 11:03:48 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:20:01.865 11:03:48 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:01.865 11:03:48 nvmf_dif -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:01.865 11:03:48 nvmf_dif -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:01.865 11:03:48 nvmf_dif -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:01.865 11:03:48 
nvmf_dif -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:01.865 11:03:48 nvmf_dif -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:01.865 11:03:48 nvmf_dif -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:01.865 11:03:48 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:01.865 11:03:48 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:01.865 11:03:48 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:01.865 11:03:48 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:01.865 11:03:48 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:01.865 11:03:48 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:01.865 11:03:48 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:01.865 11:03:48 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:01.865 11:03:48 nvmf_dif -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:01.865 11:03:48 nvmf_dif -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:01.865 11:03:48 nvmf_dif -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:01.865 11:03:48 nvmf_dif -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:01.865 11:03:48 nvmf_dif -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:01.865 11:03:48 nvmf_dif -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:01.865 11:03:48 nvmf_dif -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:01.865 11:03:48 nvmf_dif -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:01.865 11:03:48 nvmf_dif -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:01.865 Cannot find device "nvmf_init_br" 00:20:01.865 11:03:48 nvmf_dif -- nvmf/common.sh@162 -- # true 00:20:01.865 11:03:48 nvmf_dif -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:01.865 Cannot find device "nvmf_init_br2" 00:20:01.865 11:03:48 nvmf_dif -- nvmf/common.sh@163 -- # true 00:20:01.865 11:03:48 nvmf_dif -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:01.865 Cannot find device "nvmf_tgt_br" 00:20:01.865 11:03:48 nvmf_dif -- nvmf/common.sh@164 -- # true 00:20:01.865 11:03:48 nvmf_dif -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:01.865 Cannot find device "nvmf_tgt_br2" 00:20:01.865 11:03:48 nvmf_dif -- nvmf/common.sh@165 -- # true 00:20:01.865 11:03:48 nvmf_dif -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:01.865 Cannot find device "nvmf_init_br" 00:20:01.865 11:03:48 nvmf_dif -- nvmf/common.sh@166 -- # true 00:20:01.865 11:03:48 nvmf_dif -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:02.124 Cannot find device "nvmf_init_br2" 00:20:02.124 11:03:48 nvmf_dif -- nvmf/common.sh@167 -- # true 00:20:02.124 11:03:48 nvmf_dif -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:02.124 Cannot find device "nvmf_tgt_br" 00:20:02.124 11:03:48 nvmf_dif -- nvmf/common.sh@168 -- # true 00:20:02.124 11:03:48 nvmf_dif -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:02.124 Cannot find device "nvmf_tgt_br2" 00:20:02.124 11:03:48 nvmf_dif -- nvmf/common.sh@169 -- # true 00:20:02.124 11:03:48 nvmf_dif -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:02.124 Cannot find device "nvmf_br" 00:20:02.124 11:03:48 nvmf_dif -- nvmf/common.sh@170 -- # true 00:20:02.124 11:03:48 nvmf_dif -- nvmf/common.sh@171 -- # 
ip link delete nvmf_init_if 00:20:02.124 Cannot find device "nvmf_init_if" 00:20:02.124 11:03:48 nvmf_dif -- nvmf/common.sh@171 -- # true 00:20:02.124 11:03:48 nvmf_dif -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:02.124 Cannot find device "nvmf_init_if2" 00:20:02.124 11:03:48 nvmf_dif -- nvmf/common.sh@172 -- # true 00:20:02.124 11:03:48 nvmf_dif -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:02.124 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:02.124 11:03:48 nvmf_dif -- nvmf/common.sh@173 -- # true 00:20:02.124 11:03:48 nvmf_dif -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:02.124 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:02.124 11:03:48 nvmf_dif -- nvmf/common.sh@174 -- # true 00:20:02.124 11:03:48 nvmf_dif -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:02.124 11:03:48 nvmf_dif -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:02.124 11:03:48 nvmf_dif -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:02.124 11:03:48 nvmf_dif -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:02.124 11:03:48 nvmf_dif -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:02.124 11:03:48 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:02.124 11:03:48 nvmf_dif -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:02.124 11:03:48 nvmf_dif -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:02.124 11:03:48 nvmf_dif -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:02.124 11:03:48 nvmf_dif -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:02.124 11:03:48 nvmf_dif -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:02.124 11:03:48 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:02.124 11:03:48 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:02.124 11:03:48 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:02.124 11:03:48 nvmf_dif -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:02.124 11:03:48 nvmf_dif -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:02.124 11:03:48 nvmf_dif -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:02.124 11:03:48 nvmf_dif -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:02.124 11:03:48 nvmf_dif -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:02.124 11:03:48 nvmf_dif -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:02.124 11:03:48 nvmf_dif -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:02.124 11:03:48 nvmf_dif -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:02.124 11:03:48 nvmf_dif -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:02.124 11:03:48 nvmf_dif -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:02.383 11:03:48 nvmf_dif -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:02.383 11:03:49 nvmf_dif -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:02.383 11:03:49 nvmf_dif -- 
nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:02.383 11:03:49 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:02.383 11:03:49 nvmf_dif -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:02.383 11:03:49 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:02.383 11:03:49 nvmf_dif -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:02.383 11:03:49 nvmf_dif -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:02.383 11:03:49 nvmf_dif -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:02.383 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:02.383 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.084 ms 00:20:02.383 00:20:02.383 --- 10.0.0.3 ping statistics --- 00:20:02.383 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:02.383 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:20:02.383 11:03:49 nvmf_dif -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:02.383 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:02.383 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.054 ms 00:20:02.383 00:20:02.383 --- 10.0.0.4 ping statistics --- 00:20:02.383 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:02.383 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:20:02.383 11:03:49 nvmf_dif -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:02.383 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:02.383 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:20:02.383 00:20:02.383 --- 10.0.0.1 ping statistics --- 00:20:02.383 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:02.383 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:20:02.383 11:03:49 nvmf_dif -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:02.383 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:02.383 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms 00:20:02.383 00:20:02.383 --- 10.0.0.2 ping statistics --- 00:20:02.383 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:02.383 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:20:02.383 11:03:49 nvmf_dif -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:02.383 11:03:49 nvmf_dif -- nvmf/common.sh@461 -- # return 0 00:20:02.383 11:03:49 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:20:02.383 11:03:49 nvmf_dif -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:02.642 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:02.642 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:02.642 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:02.642 11:03:49 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:02.642 11:03:49 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:02.642 11:03:49 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:02.642 11:03:49 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:02.642 11:03:49 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:02.642 11:03:49 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:02.642 11:03:49 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:20:02.642 11:03:49 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:20:02.642 11:03:49 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:02.642 11:03:49 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:02.642 11:03:49 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:02.642 11:03:49 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=82655 00:20:02.642 11:03:49 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 82655 00:20:02.642 11:03:49 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 82655 ']' 00:20:02.642 11:03:49 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:02.642 11:03:49 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:02.642 11:03:49 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:02.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:02.642 11:03:49 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:02.642 11:03:49 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:02.642 11:03:49 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:02.901 [2024-11-15 11:03:49.560630] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:20:02.901 [2024-11-15 11:03:49.560723] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:02.901 [2024-11-15 11:03:49.715440] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:03.160 [2024-11-15 11:03:49.783465] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:20:03.160 [2024-11-15 11:03:49.783776] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:03.160 [2024-11-15 11:03:49.783803] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:03.160 [2024-11-15 11:03:49.783815] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:03.160 [2024-11-15 11:03:49.783825] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:03.160 [2024-11-15 11:03:49.784272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:03.160 [2024-11-15 11:03:49.858266] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:03.160 11:03:49 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:03.160 11:03:49 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:20:03.160 11:03:49 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:03.160 11:03:49 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:03.160 11:03:49 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:03.160 11:03:49 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:03.160 11:03:49 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:20:03.160 11:03:49 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:20:03.160 11:03:49 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.160 11:03:49 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:03.160 [2024-11-15 11:03:49.985143] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:03.160 11:03:49 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.160 11:03:49 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:20:03.160 11:03:49 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:03.160 11:03:49 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:03.160 11:03:49 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:03.160 ************************************ 00:20:03.160 START TEST fio_dif_1_default 00:20:03.160 ************************************ 00:20:03.160 11:03:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:20:03.160 11:03:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:20:03.160 11:03:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:20:03.160 11:03:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:20:03.160 11:03:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:20:03.160 11:03:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:20:03.160 11:03:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:20:03.160 11:03:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.160 11:03:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:03.160 bdev_null0 00:20:03.160 11:03:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.160 11:03:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:03.160 
11:03:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.160 11:03:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:03.160 11:03:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.160 11:03:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:03.160 11:03:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.160 11:03:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:03.420 11:03:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.420 11:03:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:20:03.420 11:03:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.420 11:03:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:03.420 [2024-11-15 11:03:50.029337] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:03.420 11:03:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.420 11:03:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:20:03.420 11:03:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:20:03.420 11:03:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:20:03.420 11:03:50 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:20:03.420 11:03:50 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:20:03.420 11:03:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:03.420 11:03:50 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:03.420 11:03:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:20:03.420 11:03:50 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:03.420 { 00:20:03.420 "params": { 00:20:03.420 "name": "Nvme$subsystem", 00:20:03.420 "trtype": "$TEST_TRANSPORT", 00:20:03.420 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:03.420 "adrfam": "ipv4", 00:20:03.420 "trsvcid": "$NVMF_PORT", 00:20:03.420 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:03.420 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:03.420 "hdgst": ${hdgst:-false}, 00:20:03.420 "ddgst": ${ddgst:-false} 00:20:03.420 }, 00:20:03.420 "method": "bdev_nvme_attach_controller" 00:20:03.420 } 00:20:03.420 EOF 00:20:03.420 )") 00:20:03.420 11:03:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:03.420 11:03:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:20:03.420 11:03:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:20:03.420 11:03:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:03.420 11:03:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:03.420 11:03:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:03.420 11:03:50 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:03.420 11:03:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:20:03.420 11:03:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:03.420 11:03:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:03.420 11:03:50 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:20:03.420 11:03:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:03.420 11:03:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:03.420 11:03:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:20:03.420 11:03:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:20:03.420 11:03:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:20:03.420 11:03:50 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 00:20:03.420 11:03:50 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:20:03.420 11:03:50 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:03.420 "params": { 00:20:03.420 "name": "Nvme0", 00:20:03.420 "trtype": "tcp", 00:20:03.420 "traddr": "10.0.0.3", 00:20:03.420 "adrfam": "ipv4", 00:20:03.420 "trsvcid": "4420", 00:20:03.420 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:03.420 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:03.420 "hdgst": false, 00:20:03.420 "ddgst": false 00:20:03.420 }, 00:20:03.420 "method": "bdev_nvme_attach_controller" 00:20:03.420 }' 00:20:03.420 11:03:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:20:03.420 11:03:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:20:03.420 11:03:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:03.420 11:03:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:03.420 11:03:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:20:03.420 11:03:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:03.420 11:03:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:20:03.420 11:03:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:20:03.420 11:03:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:03.420 11:03:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:03.420 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:20:03.420 fio-3.35 00:20:03.420 Starting 1 thread 00:20:15.628 00:20:15.628 filename0: (groupid=0, jobs=1): err= 0: pid=82714: Fri Nov 15 11:04:00 2024 00:20:15.628 read: IOPS=10.5k, BW=40.9MiB/s (42.9MB/s)(409MiB/10001msec) 00:20:15.628 slat (usec): min=5, max=222, avg= 7.26, stdev= 2.15 00:20:15.628 clat (usec): min=316, max=3318, avg=360.03, stdev=30.21 00:20:15.628 lat (usec): min=322, max=3355, avg=367.29, stdev=30.74 00:20:15.628 clat percentiles (usec): 00:20:15.628 | 1.00th=[ 334], 5.00th=[ 338], 
10.00th=[ 343], 20.00th=[ 347], 00:20:15.628 | 30.00th=[ 351], 40.00th=[ 355], 50.00th=[ 359], 60.00th=[ 363], 00:20:15.628 | 70.00th=[ 367], 80.00th=[ 371], 90.00th=[ 379], 95.00th=[ 388], 00:20:15.628 | 99.00th=[ 416], 99.50th=[ 429], 99.90th=[ 478], 99.95th=[ 506], 00:20:15.628 | 99.99th=[ 1434] 00:20:15.628 bw ( KiB/s): min=38944, max=42336, per=100.00%, avg=41936.84, stdev=752.77, samples=19 00:20:15.628 iops : min= 9736, max=10584, avg=10484.21, stdev=188.19, samples=19 00:20:15.628 lat (usec) : 500=99.94%, 750=0.04% 00:20:15.628 lat (msec) : 2=0.01%, 4=0.01% 00:20:15.628 cpu : usr=84.58%, sys=13.54%, ctx=89, majf=0, minf=9 00:20:15.628 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:15.628 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:15.628 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:15.628 issued rwts: total=104804,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:15.628 latency : target=0, window=0, percentile=100.00%, depth=4 00:20:15.628 00:20:15.628 Run status group 0 (all jobs): 00:20:15.628 READ: bw=40.9MiB/s (42.9MB/s), 40.9MiB/s-40.9MiB/s (42.9MB/s-42.9MB/s), io=409MiB (429MB), run=10001-10001msec 00:20:15.628 11:04:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:20:15.628 11:04:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:20:15.628 11:04:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:20:15.628 11:04:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:15.628 11:04:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:20:15.628 11:04:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:15.628 11:04:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.628 11:04:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:15.628 11:04:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.628 11:04:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:15.628 11:04:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.628 11:04:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:15.628 ************************************ 00:20:15.628 END TEST fio_dif_1_default 00:20:15.628 ************************************ 00:20:15.628 11:04:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.628 00:20:15.628 real 0m11.058s 00:20:15.628 user 0m9.143s 00:20:15.628 sys 0m1.648s 00:20:15.628 11:04:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:15.628 11:04:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:15.628 11:04:01 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:20:15.628 11:04:01 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:15.628 11:04:01 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:15.628 11:04:01 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:15.628 ************************************ 00:20:15.628 START TEST fio_dif_1_multi_subsystems 00:20:15.628 ************************************ 00:20:15.628 11:04:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # 
fio_dif_1_multi_subsystems 00:20:15.628 11:04:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:20:15.628 11:04:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:20:15.629 11:04:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:20:15.629 11:04:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:20:15.629 11:04:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:20:15.629 11:04:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:20:15.629 11:04:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:20:15.629 11:04:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.629 11:04:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:15.629 bdev_null0 00:20:15.629 11:04:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.629 11:04:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:15.629 11:04:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.629 11:04:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:15.629 11:04:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.629 11:04:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:15.629 11:04:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.629 11:04:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:15.629 11:04:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.629 11:04:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:20:15.629 11:04:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.629 11:04:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:15.629 [2024-11-15 11:04:01.141945] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:15.629 11:04:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.629 11:04:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:20:15.629 11:04:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:20:15.629 11:04:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:20:15.629 11:04:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:20:15.629 11:04:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.629 11:04:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:15.629 bdev_null1 00:20:15.629 11:04:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.629 11:04:01 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:20:15.629 11:04:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.629 11:04:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:15.629 11:04:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.629 11:04:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:20:15.629 11:04:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.629 11:04:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:15.629 11:04:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.629 11:04:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:15.629 11:04:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.629 11:04:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:15.629 11:04:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.629 11:04:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:20:15.629 11:04:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:20:15.629 11:04:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:20:15.629 11:04:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:20:15.629 11:04:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:20:15.629 11:04:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:15.629 11:04:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:15.629 11:04:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:15.629 { 00:20:15.629 "params": { 00:20:15.629 "name": "Nvme$subsystem", 00:20:15.629 "trtype": "$TEST_TRANSPORT", 00:20:15.629 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:15.629 "adrfam": "ipv4", 00:20:15.629 "trsvcid": "$NVMF_PORT", 00:20:15.629 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:15.629 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:15.629 "hdgst": ${hdgst:-false}, 00:20:15.629 "ddgst": ${ddgst:-false} 00:20:15.629 }, 00:20:15.629 "method": "bdev_nvme_attach_controller" 00:20:15.629 } 00:20:15.629 EOF 00:20:15.629 )") 00:20:15.629 11:04:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:15.629 11:04:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:20:15.629 11:04:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:20:15.629 11:04:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:15.629 11:04:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:20:15.629 11:04:01 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:15.629 11:04:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:15.629 11:04:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:20:15.629 11:04:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:15.629 11:04:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:20:15.629 11:04:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:15.629 11:04:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:15.629 11:04:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:20:15.629 11:04:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:20:15.629 11:04:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:20:15.629 11:04:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:15.629 11:04:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:15.629 11:04:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:20:15.629 11:04:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:15.629 { 00:20:15.629 "params": { 00:20:15.629 "name": "Nvme$subsystem", 00:20:15.629 "trtype": "$TEST_TRANSPORT", 00:20:15.629 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:15.629 "adrfam": "ipv4", 00:20:15.629 "trsvcid": "$NVMF_PORT", 00:20:15.629 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:15.629 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:15.629 "hdgst": ${hdgst:-false}, 00:20:15.629 "ddgst": ${ddgst:-false} 00:20:15.629 }, 00:20:15.629 "method": "bdev_nvme_attach_controller" 00:20:15.629 } 00:20:15.629 EOF 00:20:15.629 )") 00:20:15.629 11:04:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:15.629 11:04:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:20:15.629 11:04:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:20:15.629 11:04:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:20:15.629 11:04:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
00:20:15.629 11:04:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:20:15.629 11:04:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:15.629 "params": { 00:20:15.629 "name": "Nvme0", 00:20:15.629 "trtype": "tcp", 00:20:15.629 "traddr": "10.0.0.3", 00:20:15.629 "adrfam": "ipv4", 00:20:15.629 "trsvcid": "4420", 00:20:15.629 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:15.630 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:15.630 "hdgst": false, 00:20:15.630 "ddgst": false 00:20:15.630 }, 00:20:15.630 "method": "bdev_nvme_attach_controller" 00:20:15.630 },{ 00:20:15.630 "params": { 00:20:15.630 "name": "Nvme1", 00:20:15.630 "trtype": "tcp", 00:20:15.630 "traddr": "10.0.0.3", 00:20:15.630 "adrfam": "ipv4", 00:20:15.630 "trsvcid": "4420", 00:20:15.630 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:15.630 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:15.630 "hdgst": false, 00:20:15.630 "ddgst": false 00:20:15.630 }, 00:20:15.630 "method": "bdev_nvme_attach_controller" 00:20:15.630 }' 00:20:15.630 11:04:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:20:15.630 11:04:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:20:15.630 11:04:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:15.630 11:04:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:15.630 11:04:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:20:15.630 11:04:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:15.630 11:04:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:20:15.630 11:04:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:20:15.630 11:04:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:15.630 11:04:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:15.630 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:20:15.630 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:20:15.630 fio-3.35 00:20:15.630 Starting 2 threads 00:20:25.652 00:20:25.652 filename0: (groupid=0, jobs=1): err= 0: pid=82879: Fri Nov 15 11:04:12 2024 00:20:25.652 read: IOPS=5326, BW=20.8MiB/s (21.8MB/s)(208MiB/10001msec) 00:20:25.652 slat (usec): min=6, max=332, avg=14.55, stdev= 7.39 00:20:25.652 clat (usec): min=434, max=3913, avg=711.50, stdev=59.03 00:20:25.652 lat (usec): min=452, max=3952, avg=726.05, stdev=60.76 00:20:25.652 clat percentiles (usec): 00:20:25.652 | 1.00th=[ 603], 5.00th=[ 635], 10.00th=[ 652], 20.00th=[ 676], 00:20:25.652 | 30.00th=[ 685], 40.00th=[ 701], 50.00th=[ 709], 60.00th=[ 717], 00:20:25.652 | 70.00th=[ 734], 80.00th=[ 750], 90.00th=[ 775], 95.00th=[ 799], 00:20:25.652 | 99.00th=[ 873], 99.50th=[ 914], 99.90th=[ 1012], 99.95th=[ 1123], 00:20:25.652 | 99.99th=[ 1336] 00:20:25.652 bw ( KiB/s): min=19296, max=21920, per=49.93%, avg=21293.47, stdev=783.57, samples=19 00:20:25.652 iops : min= 4824, max= 5480, 
avg=5323.37, stdev=195.89, samples=19 00:20:25.652 lat (usec) : 500=0.05%, 750=82.25%, 1000=17.59% 00:20:25.652 lat (msec) : 2=0.11%, 4=0.01% 00:20:25.652 cpu : usr=89.45%, sys=8.73%, ctx=190, majf=0, minf=0 00:20:25.652 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:25.652 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:25.652 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:25.652 issued rwts: total=53268,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:25.652 latency : target=0, window=0, percentile=100.00%, depth=4 00:20:25.652 filename1: (groupid=0, jobs=1): err= 0: pid=82880: Fri Nov 15 11:04:12 2024 00:20:25.652 read: IOPS=5334, BW=20.8MiB/s (21.8MB/s)(208MiB/10001msec) 00:20:25.652 slat (usec): min=5, max=101, avg=15.12, stdev= 7.58 00:20:25.652 clat (usec): min=352, max=1314, avg=706.66, stdev=45.84 00:20:25.652 lat (usec): min=360, max=1341, avg=721.79, stdev=48.63 00:20:25.652 clat percentiles (usec): 00:20:25.652 | 1.00th=[ 627], 5.00th=[ 652], 10.00th=[ 660], 20.00th=[ 676], 00:20:25.652 | 30.00th=[ 685], 40.00th=[ 693], 50.00th=[ 701], 60.00th=[ 709], 00:20:25.652 | 70.00th=[ 725], 80.00th=[ 734], 90.00th=[ 766], 95.00th=[ 791], 00:20:25.652 | 99.00th=[ 857], 99.50th=[ 881], 99.90th=[ 938], 99.95th=[ 955], 00:20:25.652 | 99.99th=[ 1004] 00:20:25.652 bw ( KiB/s): min=19360, max=21920, per=50.01%, avg=21327.16, stdev=762.82, samples=19 00:20:25.652 iops : min= 4840, max= 5480, avg=5331.79, stdev=190.71, samples=19 00:20:25.652 lat (usec) : 500=0.14%, 750=86.01%, 1000=13.84% 00:20:25.652 lat (msec) : 2=0.01% 00:20:25.652 cpu : usr=89.87%, sys=8.54%, ctx=13, majf=0, minf=0 00:20:25.652 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:25.652 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:25.652 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:25.652 issued rwts: total=53348,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:25.652 latency : target=0, window=0, percentile=100.00%, depth=4 00:20:25.652 00:20:25.652 Run status group 0 (all jobs): 00:20:25.652 READ: bw=41.6MiB/s (43.7MB/s), 20.8MiB/s-20.8MiB/s (21.8MB/s-21.8MB/s), io=416MiB (437MB), run=10001-10001msec 00:20:25.652 11:04:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:20:25.652 11:04:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:20:25.652 11:04:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:20:25.652 11:04:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:25.652 11:04:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:20:25.652 11:04:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:25.652 11:04:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.652 11:04:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:25.652 11:04:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.652 11:04:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:25.653 11:04:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.653 11:04:12 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:20:25.653 11:04:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.653 11:04:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:20:25.653 11:04:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:20:25.653 11:04:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:20:25.653 11:04:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:25.653 11:04:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.653 11:04:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:25.653 11:04:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.653 11:04:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:20:25.653 11:04:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.653 11:04:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:25.653 ************************************ 00:20:25.653 END TEST fio_dif_1_multi_subsystems 00:20:25.653 ************************************ 00:20:25.653 11:04:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.653 00:20:25.653 real 0m11.187s 00:20:25.653 user 0m18.699s 00:20:25.653 sys 0m2.057s 00:20:25.653 11:04:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:25.653 11:04:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:25.653 11:04:12 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:20:25.653 11:04:12 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:25.653 11:04:12 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:25.653 11:04:12 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:25.653 ************************************ 00:20:25.653 START TEST fio_dif_rand_params 00:20:25.653 ************************************ 00:20:25.653 11:04:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:20:25.653 11:04:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:20:25.653 11:04:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:20:25.653 11:04:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:20:25.653 11:04:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:20:25.653 11:04:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:20:25.653 11:04:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:20:25.653 11:04:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:20:25.653 11:04:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:20:25.653 11:04:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:20:25.653 11:04:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:25.653 11:04:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:20:25.653 11:04:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:20:25.653 11:04:12 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:20:25.653 11:04:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.653 11:04:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:25.653 bdev_null0 00:20:25.653 11:04:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.653 11:04:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:25.653 11:04:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.653 11:04:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:25.653 11:04:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.653 11:04:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:25.653 11:04:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.653 11:04:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:25.653 11:04:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.653 11:04:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:20:25.653 11:04:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.653 11:04:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:25.653 [2024-11-15 11:04:12.375469] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:25.653 11:04:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.653 11:04:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:20:25.653 11:04:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:20:25.653 11:04:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:20:25.653 11:04:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:20:25.653 11:04:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:20:25.653 11:04:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:25.653 11:04:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:25.653 { 00:20:25.653 "params": { 00:20:25.653 "name": "Nvme$subsystem", 00:20:25.653 "trtype": "$TEST_TRANSPORT", 00:20:25.653 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:25.653 "adrfam": "ipv4", 00:20:25.653 "trsvcid": "$NVMF_PORT", 00:20:25.653 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:25.653 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:25.653 "hdgst": ${hdgst:-false}, 00:20:25.653 "ddgst": ${ddgst:-false} 00:20:25.653 }, 00:20:25.653 "method": "bdev_nvme_attach_controller" 00:20:25.653 } 00:20:25.653 EOF 00:20:25.653 )") 00:20:25.653 11:04:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:25.653 11:04:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf 
/dev/fd/62 /dev/fd/61 00:20:25.653 11:04:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:20:25.653 11:04:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:25.653 11:04:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:20:25.653 11:04:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:20:25.653 11:04:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:25.653 11:04:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:25.653 11:04:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:25.653 11:04:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:20:25.653 11:04:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:20:25.653 11:04:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:25.653 11:04:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:25.653 11:04:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:20:25.653 11:04:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:25.653 11:04:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:25.653 11:04:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:20:25.653 11:04:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:20:25.653 11:04:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:25.653 11:04:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:20:25.653 11:04:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:25.653 "params": { 00:20:25.653 "name": "Nvme0", 00:20:25.653 "trtype": "tcp", 00:20:25.653 "traddr": "10.0.0.3", 00:20:25.653 "adrfam": "ipv4", 00:20:25.653 "trsvcid": "4420", 00:20:25.653 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:25.653 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:25.653 "hdgst": false, 00:20:25.653 "ddgst": false 00:20:25.653 }, 00:20:25.653 "method": "bdev_nvme_attach_controller" 00:20:25.653 }' 00:20:25.653 11:04:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:20:25.653 11:04:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:20:25.653 11:04:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:25.653 11:04:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:20:25.653 11:04:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:25.653 11:04:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:25.653 11:04:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:20:25.653 11:04:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:20:25.653 11:04:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:25.653 11:04:12 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:25.912 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:20:25.912 ... 00:20:25.912 fio-3.35 00:20:25.912 Starting 3 threads 00:20:32.479 00:20:32.479 filename0: (groupid=0, jobs=1): err= 0: pid=83037: Fri Nov 15 11:04:18 2024 00:20:32.479 read: IOPS=294, BW=36.9MiB/s (38.7MB/s)(185MiB/5004msec) 00:20:32.479 slat (nsec): min=5442, max=87763, avg=24508.36, stdev=13238.00 00:20:32.479 clat (usec): min=7260, max=11094, avg=10112.43, stdev=217.07 00:20:32.479 lat (usec): min=7273, max=11112, avg=10136.93, stdev=218.30 00:20:32.479 clat percentiles (usec): 00:20:32.479 | 1.00th=[ 9896], 5.00th=[ 9896], 10.00th=[ 9896], 20.00th=[10028], 00:20:32.479 | 30.00th=[10028], 40.00th=[10028], 50.00th=[10028], 60.00th=[10159], 00:20:32.479 | 70.00th=[10159], 80.00th=[10159], 90.00th=[10421], 95.00th=[10552], 00:20:32.479 | 99.00th=[10683], 99.50th=[10814], 99.90th=[11076], 99.95th=[11076], 00:20:32.479 | 99.99th=[11076] 00:20:32.479 bw ( KiB/s): min=37632, max=38400, per=33.28%, avg=37717.33, stdev=256.00, samples=9 00:20:32.479 iops : min= 294, max= 300, avg=294.67, stdev= 2.00, samples=9 00:20:32.479 lat (msec) : 10=24.05%, 20=75.95% 00:20:32.479 cpu : usr=94.60%, sys=4.84%, ctx=9, majf=0, minf=0 00:20:32.479 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:32.479 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:32.479 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:32.479 issued rwts: total=1476,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:32.479 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:32.479 filename0: (groupid=0, jobs=1): err= 0: pid=83038: Fri Nov 15 11:04:18 2024 00:20:32.479 read: IOPS=294, BW=36.9MiB/s (38.7MB/s)(185MiB/5004msec) 00:20:32.479 slat (nsec): min=6732, max=88281, avg=24437.42, stdev=13593.60 00:20:32.479 clat (usec): min=7249, max=11970, avg=10112.66, stdev=234.79 00:20:32.479 lat (usec): min=7261, max=11996, avg=10137.10, stdev=236.00 00:20:32.479 clat percentiles (usec): 00:20:32.479 | 1.00th=[ 9896], 5.00th=[ 9896], 10.00th=[ 9896], 20.00th=[10028], 00:20:32.479 | 30.00th=[10028], 40.00th=[10028], 50.00th=[10028], 60.00th=[10159], 00:20:32.479 | 70.00th=[10159], 80.00th=[10159], 90.00th=[10421], 95.00th=[10552], 00:20:32.479 | 99.00th=[10683], 99.50th=[10814], 99.90th=[11994], 99.95th=[11994], 00:20:32.479 | 99.99th=[11994] 00:20:32.479 bw ( KiB/s): min=37632, max=38400, per=33.28%, avg=37717.33, stdev=256.00, samples=9 00:20:32.479 iops : min= 294, max= 300, avg=294.67, stdev= 2.00, samples=9 00:20:32.479 lat (msec) : 10=24.86%, 20=75.14% 00:20:32.479 cpu : usr=94.80%, sys=4.64%, ctx=12, majf=0, minf=0 00:20:32.479 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:32.479 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:32.479 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:32.479 issued rwts: total=1476,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:32.479 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:32.479 filename0: (groupid=0, jobs=1): err= 0: pid=83039: Fri Nov 15 11:04:18 2024 00:20:32.479 read: IOPS=295, BW=37.0MiB/s (38.8MB/s)(185MiB/5002msec) 00:20:32.479 slat (nsec): min=6433, max=71762, avg=17477.80, stdev=10201.23 00:20:32.479 clat (usec): min=3742, max=10948, 
avg=10105.19, stdev=353.76 00:20:32.479 lat (usec): min=3752, max=10969, avg=10122.67, stdev=354.41 00:20:32.479 clat percentiles (usec): 00:20:32.479 | 1.00th=[ 9634], 5.00th=[ 9896], 10.00th=[10028], 20.00th=[10028], 00:20:32.479 | 30.00th=[10028], 40.00th=[10028], 50.00th=[10028], 60.00th=[10159], 00:20:32.479 | 70.00th=[10159], 80.00th=[10159], 90.00th=[10421], 95.00th=[10552], 00:20:32.479 | 99.00th=[10814], 99.50th=[10814], 99.90th=[10945], 99.95th=[10945], 00:20:32.479 | 99.99th=[10945] 00:20:32.479 bw ( KiB/s): min=36864, max=38400, per=33.35%, avg=37802.67, stdev=640.00, samples=9 00:20:32.479 iops : min= 288, max= 300, avg=295.33, stdev= 5.00, samples=9 00:20:32.479 lat (msec) : 4=0.20%, 10=17.78%, 20=82.01% 00:20:32.479 cpu : usr=94.68%, sys=4.78%, ctx=9, majf=0, minf=0 00:20:32.479 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:32.479 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:32.479 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:32.479 issued rwts: total=1479,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:32.479 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:32.479 00:20:32.479 Run status group 0 (all jobs): 00:20:32.479 READ: bw=111MiB/s (116MB/s), 36.9MiB/s-37.0MiB/s (38.7MB/s-38.8MB/s), io=554MiB (581MB), run=5002-5004msec 00:20:32.479 11:04:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:20:32.479 11:04:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:20:32.479 11:04:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:32.479 11:04:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:32.479 11:04:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:20:32.479 11:04:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:32.479 11:04:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.479 11:04:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:32.479 11:04:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.479 11:04:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:32.479 11:04:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.479 11:04:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:32.479 11:04:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.479 11:04:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:20:32.479 11:04:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:20:32.479 11:04:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:20:32.479 11:04:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:20:32.479 11:04:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:20:32.479 11:04:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:20:32.479 11:04:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:20:32.479 11:04:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:20:32.479 11:04:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:32.479 11:04:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # 
create_subsystem 0 00:20:32.479 11:04:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:20:32.479 11:04:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:20:32.479 11:04:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.479 11:04:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:32.479 bdev_null0 00:20:32.479 11:04:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.479 11:04:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:32.479 11:04:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.479 11:04:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:32.479 11:04:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.479 11:04:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:32.479 11:04:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.479 11:04:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:32.479 11:04:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.479 11:04:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:20:32.479 11:04:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.479 11:04:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:32.479 [2024-11-15 11:04:18.428982] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:32.479 11:04:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.479 11:04:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:32.479 11:04:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:20:32.479 11:04:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:20:32.479 11:04:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:20:32.479 11:04:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.479 11:04:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:32.479 bdev_null1 00:20:32.479 11:04:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.479 11:04:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:20:32.479 11:04:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.479 11:04:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:32.479 11:04:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.479 11:04:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:20:32.479 11:04:18 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.479 11:04:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:32.479 11:04:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.479 11:04:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:32.479 11:04:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.479 11:04:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:32.479 11:04:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.479 11:04:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:32.480 11:04:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:20:32.480 11:04:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:20:32.480 11:04:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:20:32.480 11:04:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.480 11:04:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:32.480 bdev_null2 00:20:32.480 11:04:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.480 11:04:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:20:32.480 11:04:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.480 11:04:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:32.480 11:04:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.480 11:04:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:20:32.480 11:04:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.480 11:04:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:32.480 11:04:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.480 11:04:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:20:32.480 11:04:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.480 11:04:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:32.480 11:04:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.480 11:04:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:20:32.480 11:04:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:20:32.480 11:04:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:20:32.480 11:04:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:20:32.480 11:04:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:20:32.480 11:04:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:32.480 11:04:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 
00:20:32.480 { 00:20:32.480 "params": { 00:20:32.480 "name": "Nvme$subsystem", 00:20:32.480 "trtype": "$TEST_TRANSPORT", 00:20:32.480 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:32.480 "adrfam": "ipv4", 00:20:32.480 "trsvcid": "$NVMF_PORT", 00:20:32.480 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:32.480 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:32.480 "hdgst": ${hdgst:-false}, 00:20:32.480 "ddgst": ${ddgst:-false} 00:20:32.480 }, 00:20:32.480 "method": "bdev_nvme_attach_controller" 00:20:32.480 } 00:20:32.480 EOF 00:20:32.480 )") 00:20:32.480 11:04:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:32.480 11:04:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:20:32.480 11:04:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:32.480 11:04:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:20:32.480 11:04:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:32.480 11:04:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:20:32.480 11:04:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:32.480 11:04:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:32.480 11:04:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:32.480 11:04:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:20:32.480 11:04:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:20:32.480 11:04:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:32.480 11:04:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:32.480 11:04:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:32.480 11:04:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:32.480 11:04:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:20:32.480 11:04:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:32.480 11:04:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:32.480 11:04:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:32.480 { 00:20:32.480 "params": { 00:20:32.480 "name": "Nvme$subsystem", 00:20:32.480 "trtype": "$TEST_TRANSPORT", 00:20:32.480 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:32.480 "adrfam": "ipv4", 00:20:32.480 "trsvcid": "$NVMF_PORT", 00:20:32.480 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:32.480 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:32.480 "hdgst": ${hdgst:-false}, 00:20:32.480 "ddgst": ${ddgst:-false} 00:20:32.480 }, 00:20:32.480 "method": "bdev_nvme_attach_controller" 00:20:32.480 } 00:20:32.480 EOF 00:20:32.480 )") 00:20:32.480 11:04:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:20:32.480 11:04:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:20:32.480 11:04:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 
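For readability, the create_subsystems 0 1 2 step traced above reduces to a short RPC sequence per subsystem. A minimal sketch, assuming the simple loop form implied by dif.sh's for-loop over "$@"; every command, NQN, serial number and the 10.0.0.3:4420 listener are taken verbatim from the trace, and rpc_cmd is the harness helper that issues each call against the running nvmf target:

    # Per-subsystem setup as seen in the xtrace (dif-type 2 null bdevs for this test case)
    for sub in 0 1 2; do
        rpc_cmd bdev_null_create "bdev_null$sub" 64 512 --md-size 16 --dif-type 2
        rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$sub" \
                --serial-number "53313233-$sub" --allow-any-host
        rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$sub" "bdev_null$sub"
        rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$sub" \
                -t tcp -a 10.0.0.3 -s 4420
    done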
00:20:32.480 11:04:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:20:32.480 11:04:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:32.480 11:04:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:20:32.480 11:04:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:32.480 11:04:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:32.480 { 00:20:32.480 "params": { 00:20:32.480 "name": "Nvme$subsystem", 00:20:32.480 "trtype": "$TEST_TRANSPORT", 00:20:32.480 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:32.480 "adrfam": "ipv4", 00:20:32.480 "trsvcid": "$NVMF_PORT", 00:20:32.480 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:32.480 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:32.480 "hdgst": ${hdgst:-false}, 00:20:32.480 "ddgst": ${ddgst:-false} 00:20:32.480 }, 00:20:32.480 "method": "bdev_nvme_attach_controller" 00:20:32.480 } 00:20:32.480 EOF 00:20:32.480 )") 00:20:32.480 11:04:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:20:32.480 11:04:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:32.480 11:04:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:20:32.480 11:04:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:20:32.480 11:04:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:20:32.480 11:04:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:32.480 "params": { 00:20:32.480 "name": "Nvme0", 00:20:32.480 "trtype": "tcp", 00:20:32.480 "traddr": "10.0.0.3", 00:20:32.480 "adrfam": "ipv4", 00:20:32.480 "trsvcid": "4420", 00:20:32.480 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:32.480 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:32.480 "hdgst": false, 00:20:32.480 "ddgst": false 00:20:32.480 }, 00:20:32.480 "method": "bdev_nvme_attach_controller" 00:20:32.480 },{ 00:20:32.480 "params": { 00:20:32.480 "name": "Nvme1", 00:20:32.480 "trtype": "tcp", 00:20:32.480 "traddr": "10.0.0.3", 00:20:32.480 "adrfam": "ipv4", 00:20:32.480 "trsvcid": "4420", 00:20:32.480 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:32.480 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:32.480 "hdgst": false, 00:20:32.480 "ddgst": false 00:20:32.480 }, 00:20:32.481 "method": "bdev_nvme_attach_controller" 00:20:32.481 },{ 00:20:32.481 "params": { 00:20:32.481 "name": "Nvme2", 00:20:32.481 "trtype": "tcp", 00:20:32.481 "traddr": "10.0.0.3", 00:20:32.481 "adrfam": "ipv4", 00:20:32.481 "trsvcid": "4420", 00:20:32.481 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:32.481 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:32.481 "hdgst": false, 00:20:32.481 "ddgst": false 00:20:32.481 }, 00:20:32.481 "method": "bdev_nvme_attach_controller" 00:20:32.481 }' 00:20:32.481 11:04:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:20:32.481 11:04:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:20:32.481 11:04:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:32.481 11:04:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:32.481 11:04:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:20:32.481 11:04:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:32.481 
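The JSON printed by nvmf/common.sh@586 above is the bdev configuration that fio consumes. A minimal sketch of the hand-off that follows, with the preload path, fio binary and flags copied from the trace; the process substitutions are only an assumed stand-in for however dif.sh actually wires up the two descriptors (the first supplies the attach-controller JSON seen as /dev/fd/62, the second the generated fio job file seen as /dev/fd/61):

    # Run fio with the SPDK bdev ioengine plugin preloaded, feeding it the
    # generated JSON config and job file via process substitution.
    LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
        /usr/src/fio/fio --ioengine=spdk_bdev \
        --spdk_json_conf <(create_json_sub_conf 0 1 2) \
        <(gen_fio_conf)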
11:04:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:20:32.481 11:04:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:20:32.481 11:04:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:32.481 11:04:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:32.481 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:20:32.481 ... 00:20:32.481 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:20:32.481 ... 00:20:32.481 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:20:32.481 ... 00:20:32.481 fio-3.35 00:20:32.481 Starting 24 threads 00:20:44.690 00:20:44.690 filename0: (groupid=0, jobs=1): err= 0: pid=83134: Fri Nov 15 11:04:29 2024 00:20:44.690 read: IOPS=226, BW=905KiB/s (926kB/s)(9064KiB/10021msec) 00:20:44.690 slat (usec): min=3, max=8033, avg=30.62, stdev=252.65 00:20:44.690 clat (msec): min=20, max=127, avg=70.59, stdev=23.22 00:20:44.690 lat (msec): min=20, max=127, avg=70.62, stdev=23.22 00:20:44.690 clat percentiles (msec): 00:20:44.690 | 1.00th=[ 24], 5.00th=[ 30], 10.00th=[ 42], 20.00th=[ 51], 00:20:44.690 | 30.00th=[ 59], 40.00th=[ 66], 50.00th=[ 70], 60.00th=[ 72], 00:20:44.690 | 70.00th=[ 83], 80.00th=[ 95], 90.00th=[ 105], 95.00th=[ 108], 00:20:44.690 | 99.00th=[ 116], 99.50th=[ 116], 99.90th=[ 122], 99.95th=[ 122], 00:20:44.690 | 99.99th=[ 128] 00:20:44.690 bw ( KiB/s): min= 712, max= 1664, per=4.15%, avg=900.20, stdev=225.08, samples=20 00:20:44.690 iops : min= 178, max= 416, avg=225.00, stdev=56.25, samples=20 00:20:44.690 lat (msec) : 50=20.21%, 100=65.58%, 250=14.21% 00:20:44.690 cpu : usr=36.29%, sys=1.34%, ctx=1113, majf=0, minf=9 00:20:44.690 IO depths : 1=0.1%, 2=1.2%, 4=4.9%, 8=78.5%, 16=15.3%, 32=0.0%, >=64=0.0% 00:20:44.690 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:44.690 complete : 0=0.0%, 4=88.3%, 8=10.6%, 16=1.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:44.690 issued rwts: total=2266,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:44.690 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:44.690 filename0: (groupid=0, jobs=1): err= 0: pid=83135: Fri Nov 15 11:04:29 2024 00:20:44.690 read: IOPS=226, BW=906KiB/s (928kB/s)(9092KiB/10033msec) 00:20:44.690 slat (usec): min=3, max=8036, avg=27.64, stdev=252.47 00:20:44.690 clat (msec): min=6, max=138, avg=70.41, stdev=25.24 00:20:44.690 lat (msec): min=6, max=138, avg=70.44, stdev=25.25 00:20:44.690 clat percentiles (msec): 00:20:44.690 | 1.00th=[ 11], 5.00th=[ 29], 10.00th=[ 34], 20.00th=[ 49], 00:20:44.690 | 30.00th=[ 59], 40.00th=[ 66], 50.00th=[ 70], 60.00th=[ 73], 00:20:44.690 | 70.00th=[ 86], 80.00th=[ 96], 90.00th=[ 105], 95.00th=[ 109], 00:20:44.690 | 99.00th=[ 114], 99.50th=[ 120], 99.90th=[ 133], 99.95th=[ 138], 00:20:44.690 | 99.99th=[ 140] 00:20:44.690 bw ( KiB/s): min= 664, max= 2180, per=4.17%, avg=905.70, stdev=325.16, samples=20 00:20:44.690 iops : min= 166, max= 545, avg=226.40, stdev=81.29, samples=20 00:20:44.690 lat (msec) : 10=0.88%, 20=3.26%, 50=16.45%, 100=63.44%, 250=15.97% 00:20:44.690 cpu : usr=38.28%, sys=1.31%, ctx=1286, majf=0, minf=9 00:20:44.690 IO depths : 1=0.2%, 2=0.8%, 4=2.6%, 8=80.2%, 
16=16.2%, 32=0.0%, >=64=0.0% 00:20:44.690 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:44.690 complete : 0=0.0%, 4=88.3%, 8=11.2%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:44.690 issued rwts: total=2273,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:44.690 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:44.690 filename0: (groupid=0, jobs=1): err= 0: pid=83136: Fri Nov 15 11:04:29 2024 00:20:44.690 read: IOPS=228, BW=915KiB/s (936kB/s)(9180KiB/10038msec) 00:20:44.690 slat (usec): min=3, max=8064, avg=38.88, stdev=334.97 00:20:44.690 clat (msec): min=5, max=143, avg=69.70, stdev=26.21 00:20:44.690 lat (msec): min=5, max=143, avg=69.74, stdev=26.20 00:20:44.690 clat percentiles (msec): 00:20:44.690 | 1.00th=[ 8], 5.00th=[ 19], 10.00th=[ 32], 20.00th=[ 48], 00:20:44.690 | 30.00th=[ 58], 40.00th=[ 66], 50.00th=[ 70], 60.00th=[ 74], 00:20:44.690 | 70.00th=[ 86], 80.00th=[ 96], 90.00th=[ 105], 95.00th=[ 109], 00:20:44.690 | 99.00th=[ 115], 99.50th=[ 116], 99.90th=[ 132], 99.95th=[ 133], 00:20:44.690 | 99.99th=[ 144] 00:20:44.690 bw ( KiB/s): min= 664, max= 2306, per=4.20%, avg=912.75, stdev=353.27, samples=20 00:20:44.690 iops : min= 166, max= 576, avg=228.10, stdev=88.21, samples=20 00:20:44.690 lat (msec) : 10=2.22%, 20=3.36%, 50=16.91%, 100=63.05%, 250=14.47% 00:20:44.690 cpu : usr=40.58%, sys=1.70%, ctx=1160, majf=0, minf=9 00:20:44.690 IO depths : 1=0.3%, 2=1.0%, 4=2.9%, 8=79.7%, 16=16.2%, 32=0.0%, >=64=0.0% 00:20:44.690 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:44.690 complete : 0=0.0%, 4=88.4%, 8=11.0%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:44.690 issued rwts: total=2295,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:44.690 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:44.690 filename0: (groupid=0, jobs=1): err= 0: pid=83137: Fri Nov 15 11:04:29 2024 00:20:44.690 read: IOPS=234, BW=939KiB/s (962kB/s)(9396KiB/10006msec) 00:20:44.690 slat (usec): min=7, max=8054, avg=37.18, stdev=340.91 00:20:44.690 clat (msec): min=6, max=119, avg=67.96, stdev=23.55 00:20:44.690 lat (msec): min=6, max=119, avg=68.00, stdev=23.55 00:20:44.690 clat percentiles (msec): 00:20:44.690 | 1.00th=[ 11], 5.00th=[ 26], 10.00th=[ 39], 20.00th=[ 48], 00:20:44.690 | 30.00th=[ 57], 40.00th=[ 61], 50.00th=[ 68], 60.00th=[ 72], 00:20:44.690 | 70.00th=[ 78], 80.00th=[ 94], 90.00th=[ 104], 95.00th=[ 108], 00:20:44.690 | 99.00th=[ 113], 99.50th=[ 113], 99.90th=[ 118], 99.95th=[ 120], 00:20:44.690 | 99.99th=[ 120] 00:20:44.690 bw ( KiB/s): min= 712, max= 1552, per=4.20%, avg=912.26, stdev=210.25, samples=19 00:20:44.690 iops : min= 178, max= 388, avg=228.05, stdev=52.56, samples=19 00:20:44.690 lat (msec) : 10=0.77%, 20=0.94%, 50=23.07%, 100=63.22%, 250=12.01% 00:20:44.690 cpu : usr=32.59%, sys=1.33%, ctx=923, majf=0, minf=9 00:20:44.690 IO depths : 1=0.1%, 2=0.6%, 4=2.8%, 8=81.0%, 16=15.5%, 32=0.0%, >=64=0.0% 00:20:44.690 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:44.690 complete : 0=0.0%, 4=87.6%, 8=11.8%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:44.690 issued rwts: total=2349,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:44.690 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:44.690 filename0: (groupid=0, jobs=1): err= 0: pid=83138: Fri Nov 15 11:04:29 2024 00:20:44.690 read: IOPS=237, BW=951KiB/s (974kB/s)(9512KiB/10001msec) 00:20:44.690 slat (usec): min=6, max=8043, avg=22.82, stdev=164.87 00:20:44.690 clat (usec): min=1497, max=125826, avg=67177.91, stdev=27041.13 
00:20:44.690 lat (usec): min=1520, max=125850, avg=67200.73, stdev=27039.56 00:20:44.690 clat percentiles (msec): 00:20:44.690 | 1.00th=[ 3], 5.00th=[ 8], 10.00th=[ 32], 20.00th=[ 47], 00:20:44.690 | 30.00th=[ 57], 40.00th=[ 63], 50.00th=[ 69], 60.00th=[ 72], 00:20:44.690 | 70.00th=[ 82], 80.00th=[ 95], 90.00th=[ 103], 95.00th=[ 108], 00:20:44.690 | 99.00th=[ 113], 99.50th=[ 120], 99.90th=[ 126], 99.95th=[ 126], 00:20:44.690 | 99.99th=[ 126] 00:20:44.690 bw ( KiB/s): min= 712, max= 1408, per=4.04%, avg=877.32, stdev=184.10, samples=19 00:20:44.690 iops : min= 178, max= 352, avg=219.32, stdev=46.01, samples=19 00:20:44.690 lat (msec) : 2=0.67%, 4=3.36%, 10=1.72%, 20=1.22%, 50=15.77% 00:20:44.691 lat (msec) : 100=66.02%, 250=11.23% 00:20:44.691 cpu : usr=38.16%, sys=1.51%, ctx=1368, majf=0, minf=9 00:20:44.691 IO depths : 1=0.2%, 2=1.9%, 4=6.9%, 8=76.2%, 16=14.8%, 32=0.0%, >=64=0.0% 00:20:44.691 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:44.691 complete : 0=0.0%, 4=88.9%, 8=9.6%, 16=1.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:44.691 issued rwts: total=2378,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:44.691 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:44.691 filename0: (groupid=0, jobs=1): err= 0: pid=83139: Fri Nov 15 11:04:29 2024 00:20:44.691 read: IOPS=223, BW=896KiB/s (917kB/s)(8972KiB/10014msec) 00:20:44.691 slat (usec): min=7, max=11018, avg=42.81, stdev=367.04 00:20:44.691 clat (msec): min=18, max=140, avg=71.20, stdev=24.44 00:20:44.691 lat (msec): min=18, max=140, avg=71.24, stdev=24.44 00:20:44.691 clat percentiles (msec): 00:20:44.691 | 1.00th=[ 22], 5.00th=[ 26], 10.00th=[ 36], 20.00th=[ 51], 00:20:44.691 | 30.00th=[ 59], 40.00th=[ 66], 50.00th=[ 70], 60.00th=[ 74], 00:20:44.691 | 70.00th=[ 88], 80.00th=[ 96], 90.00th=[ 105], 95.00th=[ 109], 00:20:44.691 | 99.00th=[ 115], 99.50th=[ 121], 99.90th=[ 131], 99.95th=[ 131], 00:20:44.691 | 99.99th=[ 142] 00:20:44.691 bw ( KiB/s): min= 640, max= 1904, per=4.08%, avg=886.37, stdev=281.84, samples=19 00:20:44.691 iops : min= 160, max= 476, avg=221.53, stdev=70.48, samples=19 00:20:44.691 lat (msec) : 20=0.18%, 50=19.93%, 100=65.40%, 250=14.49% 00:20:44.691 cpu : usr=46.89%, sys=1.93%, ctx=1181, majf=0, minf=9 00:20:44.691 IO depths : 1=0.1%, 2=1.2%, 4=4.8%, 8=78.1%, 16=15.9%, 32=0.0%, >=64=0.0% 00:20:44.691 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:44.691 complete : 0=0.0%, 4=88.7%, 8=10.2%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:44.691 issued rwts: total=2243,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:44.691 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:44.691 filename0: (groupid=0, jobs=1): err= 0: pid=83140: Fri Nov 15 11:04:29 2024 00:20:44.691 read: IOPS=214, BW=859KiB/s (880kB/s)(8604KiB/10017msec) 00:20:44.691 slat (usec): min=4, max=4048, avg=31.32, stdev=208.93 00:20:44.691 clat (msec): min=16, max=131, avg=74.35, stdev=22.80 00:20:44.691 lat (msec): min=16, max=131, avg=74.38, stdev=22.81 00:20:44.691 clat percentiles (msec): 00:20:44.691 | 1.00th=[ 24], 5.00th=[ 32], 10.00th=[ 46], 20.00th=[ 59], 00:20:44.691 | 30.00th=[ 63], 40.00th=[ 68], 50.00th=[ 71], 60.00th=[ 80], 00:20:44.691 | 70.00th=[ 89], 80.00th=[ 99], 90.00th=[ 106], 95.00th=[ 109], 00:20:44.691 | 99.00th=[ 120], 99.50th=[ 132], 99.90th=[ 132], 99.95th=[ 132], 00:20:44.691 | 99.99th=[ 132] 00:20:44.691 bw ( KiB/s): min= 688, max= 1536, per=3.94%, avg=855.35, stdev=195.14, samples=20 00:20:44.691 iops : min= 172, max= 384, avg=213.75, stdev=48.80, samples=20 
00:20:44.691 lat (msec) : 20=0.19%, 50=14.13%, 100=70.39%, 250=15.30% 00:20:44.691 cpu : usr=40.93%, sys=1.58%, ctx=1577, majf=0, minf=9 00:20:44.691 IO depths : 1=0.1%, 2=2.7%, 4=10.6%, 8=72.2%, 16=14.4%, 32=0.0%, >=64=0.0% 00:20:44.691 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:44.691 complete : 0=0.0%, 4=90.0%, 8=7.7%, 16=2.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:44.691 issued rwts: total=2151,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:44.691 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:44.691 filename0: (groupid=0, jobs=1): err= 0: pid=83141: Fri Nov 15 11:04:29 2024 00:20:44.691 read: IOPS=226, BW=906KiB/s (928kB/s)(9060KiB/10002msec) 00:20:44.691 slat (usec): min=6, max=10059, avg=46.88, stdev=462.81 00:20:44.691 clat (msec): min=2, max=143, avg=70.42, stdev=25.96 00:20:44.691 lat (msec): min=2, max=143, avg=70.47, stdev=25.96 00:20:44.691 clat percentiles (msec): 00:20:44.691 | 1.00th=[ 4], 5.00th=[ 24], 10.00th=[ 36], 20.00th=[ 50], 00:20:44.691 | 30.00th=[ 61], 40.00th=[ 66], 50.00th=[ 71], 60.00th=[ 75], 00:20:44.691 | 70.00th=[ 85], 80.00th=[ 95], 90.00th=[ 106], 95.00th=[ 110], 00:20:44.691 | 99.00th=[ 117], 99.50th=[ 121], 99.90th=[ 127], 99.95th=[ 144], 00:20:44.691 | 99.99th=[ 144] 00:20:44.691 bw ( KiB/s): min= 640, max= 1520, per=3.95%, avg=857.11, stdev=204.99, samples=19 00:20:44.691 iops : min= 160, max= 380, avg=214.26, stdev=51.24, samples=19 00:20:44.691 lat (msec) : 4=1.06%, 10=1.99%, 20=1.02%, 50=16.78%, 100=65.25% 00:20:44.691 lat (msec) : 250=13.91% 00:20:44.691 cpu : usr=32.85%, sys=1.14%, ctx=929, majf=0, minf=9 00:20:44.691 IO depths : 1=0.1%, 2=1.7%, 4=6.8%, 8=76.3%, 16=15.0%, 32=0.0%, >=64=0.0% 00:20:44.691 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:44.691 complete : 0=0.0%, 4=88.9%, 8=9.6%, 16=1.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:44.691 issued rwts: total=2265,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:44.691 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:44.691 filename1: (groupid=0, jobs=1): err= 0: pid=83142: Fri Nov 15 11:04:29 2024 00:20:44.691 read: IOPS=218, BW=873KiB/s (894kB/s)(8740KiB/10009msec) 00:20:44.691 slat (usec): min=6, max=7992, avg=34.40, stdev=250.95 00:20:44.691 clat (msec): min=11, max=120, avg=73.10, stdev=22.16 00:20:44.691 lat (msec): min=11, max=120, avg=73.13, stdev=22.16 00:20:44.691 clat percentiles (msec): 00:20:44.691 | 1.00th=[ 20], 5.00th=[ 33], 10.00th=[ 45], 20.00th=[ 58], 00:20:44.691 | 30.00th=[ 63], 40.00th=[ 67], 50.00th=[ 71], 60.00th=[ 75], 00:20:44.691 | 70.00th=[ 89], 80.00th=[ 95], 90.00th=[ 105], 95.00th=[ 108], 00:20:44.691 | 99.00th=[ 116], 99.50th=[ 118], 99.90th=[ 121], 99.95th=[ 121], 00:20:44.691 | 99.99th=[ 121] 00:20:44.691 bw ( KiB/s): min= 656, max= 1520, per=3.92%, avg=852.11, stdev=193.31, samples=19 00:20:44.691 iops : min= 164, max= 380, avg=213.00, stdev=48.31, samples=19 00:20:44.691 lat (msec) : 20=1.14%, 50=13.73%, 100=71.08%, 250=14.05% 00:20:44.691 cpu : usr=45.10%, sys=1.89%, ctx=1315, majf=0, minf=9 00:20:44.691 IO depths : 1=0.2%, 2=2.5%, 4=9.5%, 8=73.4%, 16=14.4%, 32=0.0%, >=64=0.0% 00:20:44.691 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:44.691 complete : 0=0.0%, 4=89.6%, 8=8.4%, 16=2.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:44.691 issued rwts: total=2185,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:44.691 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:44.691 filename1: (groupid=0, jobs=1): err= 0: pid=83143: Fri Nov 15 11:04:29 2024 
00:20:44.691 read: IOPS=225, BW=903KiB/s (924kB/s)(9056KiB/10033msec) 00:20:44.691 slat (usec): min=7, max=8031, avg=30.71, stdev=266.36 00:20:44.691 clat (msec): min=8, max=139, avg=70.71, stdev=26.52 00:20:44.691 lat (msec): min=8, max=139, avg=70.74, stdev=26.52 00:20:44.691 clat percentiles (msec): 00:20:44.691 | 1.00th=[ 14], 5.00th=[ 24], 10.00th=[ 36], 20.00th=[ 48], 00:20:44.691 | 30.00th=[ 57], 40.00th=[ 65], 50.00th=[ 70], 60.00th=[ 77], 00:20:44.691 | 70.00th=[ 90], 80.00th=[ 97], 90.00th=[ 107], 95.00th=[ 109], 00:20:44.691 | 99.00th=[ 121], 99.50th=[ 123], 99.90th=[ 136], 99.95th=[ 138], 00:20:44.691 | 99.99th=[ 140] 00:20:44.691 bw ( KiB/s): min= 632, max= 2200, per=4.14%, avg=899.10, stdev=343.52, samples=20 00:20:44.691 iops : min= 158, max= 550, avg=224.75, stdev=85.89, samples=20 00:20:44.691 lat (msec) : 10=0.62%, 20=1.68%, 50=22.31%, 100=58.61%, 250=16.78% 00:20:44.691 cpu : usr=36.75%, sys=1.58%, ctx=1238, majf=0, minf=9 00:20:44.691 IO depths : 1=0.1%, 2=0.3%, 4=1.1%, 8=81.8%, 16=16.7%, 32=0.0%, >=64=0.0% 00:20:44.691 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:44.691 complete : 0=0.0%, 4=88.0%, 8=11.7%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:44.691 issued rwts: total=2264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:44.691 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:44.691 filename1: (groupid=0, jobs=1): err= 0: pid=83144: Fri Nov 15 11:04:29 2024 00:20:44.691 read: IOPS=232, BW=928KiB/s (951kB/s)(9308KiB/10027msec) 00:20:44.691 slat (usec): min=4, max=8055, avg=27.63, stdev=213.21 00:20:44.691 clat (msec): min=15, max=122, avg=68.79, stdev=23.13 00:20:44.691 lat (msec): min=15, max=122, avg=68.82, stdev=23.13 00:20:44.691 clat percentiles (msec): 00:20:44.691 | 1.00th=[ 23], 5.00th=[ 27], 10.00th=[ 37], 20.00th=[ 48], 00:20:44.691 | 30.00th=[ 59], 40.00th=[ 62], 50.00th=[ 69], 60.00th=[ 72], 00:20:44.691 | 70.00th=[ 81], 80.00th=[ 94], 90.00th=[ 104], 95.00th=[ 108], 00:20:44.691 | 99.00th=[ 112], 99.50th=[ 113], 99.90th=[ 118], 99.95th=[ 118], 00:20:44.691 | 99.99th=[ 123] 00:20:44.691 bw ( KiB/s): min= 712, max= 1824, per=4.26%, avg=924.30, stdev=253.90, samples=20 00:20:44.691 iops : min= 178, max= 456, avg=231.05, stdev=63.49, samples=20 00:20:44.691 lat (msec) : 20=0.09%, 50=22.13%, 100=65.66%, 250=12.12% 00:20:44.691 cpu : usr=37.73%, sys=1.51%, ctx=1172, majf=0, minf=9 00:20:44.691 IO depths : 1=0.1%, 2=0.6%, 4=2.3%, 8=81.2%, 16=15.8%, 32=0.0%, >=64=0.0% 00:20:44.691 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:44.691 complete : 0=0.0%, 4=87.7%, 8=11.8%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:44.691 issued rwts: total=2327,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:44.691 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:44.691 filename1: (groupid=0, jobs=1): err= 0: pid=83145: Fri Nov 15 11:04:29 2024 00:20:44.691 read: IOPS=221, BW=886KiB/s (907kB/s)(8868KiB/10009msec) 00:20:44.691 slat (usec): min=3, max=8034, avg=25.91, stdev=200.15 00:20:44.691 clat (msec): min=15, max=137, avg=72.11, stdev=22.82 00:20:44.691 lat (msec): min=15, max=137, avg=72.14, stdev=22.83 00:20:44.691 clat percentiles (msec): 00:20:44.691 | 1.00th=[ 27], 5.00th=[ 33], 10.00th=[ 44], 20.00th=[ 53], 00:20:44.691 | 30.00th=[ 61], 40.00th=[ 67], 50.00th=[ 71], 60.00th=[ 74], 00:20:44.691 | 70.00th=[ 84], 80.00th=[ 96], 90.00th=[ 105], 95.00th=[ 110], 00:20:44.691 | 99.00th=[ 117], 99.50th=[ 121], 99.90th=[ 132], 99.95th=[ 136], 00:20:44.691 | 99.99th=[ 138] 00:20:44.691 bw ( 
KiB/s): min= 688, max= 1428, per=4.06%, avg=882.65, stdev=193.00, samples=20 00:20:44.691 iops : min= 172, max= 357, avg=220.55, stdev=48.27, samples=20 00:20:44.691 lat (msec) : 20=0.59%, 50=18.67%, 100=66.40%, 250=14.34% 00:20:44.691 cpu : usr=33.54%, sys=1.11%, ctx=1108, majf=0, minf=9 00:20:44.691 IO depths : 1=0.1%, 2=0.7%, 4=2.8%, 8=80.4%, 16=16.1%, 32=0.0%, >=64=0.0% 00:20:44.691 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:44.691 complete : 0=0.0%, 4=88.1%, 8=11.3%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:44.691 issued rwts: total=2217,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:44.691 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:44.691 filename1: (groupid=0, jobs=1): err= 0: pid=83146: Fri Nov 15 11:04:29 2024 00:20:44.691 read: IOPS=220, BW=883KiB/s (904kB/s)(8872KiB/10053msec) 00:20:44.691 slat (usec): min=3, max=8006, avg=30.05, stdev=269.78 00:20:44.691 clat (msec): min=4, max=140, avg=72.28, stdev=27.65 00:20:44.691 lat (msec): min=4, max=140, avg=72.31, stdev=27.65 00:20:44.691 clat percentiles (msec): 00:20:44.692 | 1.00th=[ 7], 5.00th=[ 19], 10.00th=[ 26], 20.00th=[ 55], 00:20:44.692 | 30.00th=[ 62], 40.00th=[ 68], 50.00th=[ 72], 60.00th=[ 79], 00:20:44.692 | 70.00th=[ 92], 80.00th=[ 99], 90.00th=[ 106], 95.00th=[ 111], 00:20:44.692 | 99.00th=[ 120], 99.50th=[ 140], 99.90th=[ 140], 99.95th=[ 140], 00:20:44.692 | 99.99th=[ 140] 00:20:44.692 bw ( KiB/s): min= 640, max= 2432, per=4.05%, avg=880.40, stdev=385.20, samples=20 00:20:44.692 iops : min= 160, max= 608, avg=220.05, stdev=96.30, samples=20 00:20:44.692 lat (msec) : 10=2.25%, 20=3.29%, 50=12.22%, 100=63.44%, 250=18.80% 00:20:44.692 cpu : usr=39.35%, sys=1.52%, ctx=1156, majf=0, minf=9 00:20:44.692 IO depths : 1=0.2%, 2=2.3%, 4=8.8%, 8=73.4%, 16=15.2%, 32=0.0%, >=64=0.0% 00:20:44.692 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:44.692 complete : 0=0.0%, 4=90.0%, 8=8.1%, 16=2.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:44.692 issued rwts: total=2218,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:44.692 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:44.692 filename1: (groupid=0, jobs=1): err= 0: pid=83147: Fri Nov 15 11:04:29 2024 00:20:44.692 read: IOPS=230, BW=922KiB/s (944kB/s)(9228KiB/10008msec) 00:20:44.692 slat (usec): min=3, max=8031, avg=31.08, stdev=256.14 00:20:44.692 clat (msec): min=7, max=135, avg=69.27, stdev=23.83 00:20:44.692 lat (msec): min=7, max=135, avg=69.31, stdev=23.83 00:20:44.692 clat percentiles (msec): 00:20:44.692 | 1.00th=[ 14], 5.00th=[ 32], 10.00th=[ 39], 20.00th=[ 48], 00:20:44.692 | 30.00th=[ 56], 40.00th=[ 64], 50.00th=[ 68], 60.00th=[ 72], 00:20:44.692 | 70.00th=[ 81], 80.00th=[ 95], 90.00th=[ 105], 95.00th=[ 108], 00:20:44.692 | 99.00th=[ 113], 99.50th=[ 117], 99.90th=[ 129], 99.95th=[ 130], 00:20:44.692 | 99.99th=[ 136] 00:20:44.692 bw ( KiB/s): min= 680, max= 1524, per=4.13%, avg=896.89, stdev=208.75, samples=19 00:20:44.692 iops : min= 170, max= 381, avg=224.21, stdev=52.18, samples=19 00:20:44.692 lat (msec) : 10=0.74%, 20=1.21%, 50=20.46%, 100=63.03%, 250=14.56% 00:20:44.692 cpu : usr=37.83%, sys=1.65%, ctx=1237, majf=0, minf=9 00:20:44.692 IO depths : 1=0.1%, 2=0.4%, 4=1.7%, 8=81.8%, 16=16.0%, 32=0.0%, >=64=0.0% 00:20:44.692 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:44.692 complete : 0=0.0%, 4=87.6%, 8=12.1%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:44.692 issued rwts: total=2307,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:44.692 latency : target=0, 
window=0, percentile=100.00%, depth=16 00:20:44.692 filename1: (groupid=0, jobs=1): err= 0: pid=83148: Fri Nov 15 11:04:29 2024 00:20:44.692 read: IOPS=229, BW=918KiB/s (940kB/s)(9188KiB/10007msec) 00:20:44.692 slat (usec): min=3, max=8057, avg=44.00, stdev=417.71 00:20:44.692 clat (msec): min=8, max=128, avg=69.51, stdev=23.78 00:20:44.692 lat (msec): min=8, max=128, avg=69.55, stdev=23.79 00:20:44.692 clat percentiles (msec): 00:20:44.692 | 1.00th=[ 15], 5.00th=[ 25], 10.00th=[ 39], 20.00th=[ 50], 00:20:44.692 | 30.00th=[ 58], 40.00th=[ 64], 50.00th=[ 69], 60.00th=[ 73], 00:20:44.692 | 70.00th=[ 82], 80.00th=[ 95], 90.00th=[ 103], 95.00th=[ 108], 00:20:44.692 | 99.00th=[ 113], 99.50th=[ 114], 99.90th=[ 123], 99.95th=[ 128], 00:20:44.692 | 99.99th=[ 129] 00:20:44.692 bw ( KiB/s): min= 712, max= 1634, per=4.12%, avg=895.16, stdev=224.83, samples=19 00:20:44.692 iops : min= 178, max= 408, avg=223.74, stdev=56.11, samples=19 00:20:44.692 lat (msec) : 10=0.57%, 20=1.22%, 50=19.46%, 100=66.57%, 250=12.19% 00:20:44.692 cpu : usr=37.48%, sys=1.32%, ctx=1085, majf=0, minf=9 00:20:44.692 IO depths : 1=0.1%, 2=1.1%, 4=4.2%, 8=79.2%, 16=15.4%, 32=0.0%, >=64=0.0% 00:20:44.692 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:44.692 complete : 0=0.0%, 4=88.1%, 8=10.9%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:44.692 issued rwts: total=2297,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:44.692 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:44.692 filename1: (groupid=0, jobs=1): err= 0: pid=83149: Fri Nov 15 11:04:29 2024 00:20:44.692 read: IOPS=217, BW=868KiB/s (889kB/s)(8708KiB/10027msec) 00:20:44.692 slat (usec): min=3, max=8051, avg=37.83, stdev=333.45 00:20:44.692 clat (msec): min=14, max=139, avg=73.46, stdev=25.12 00:20:44.692 lat (msec): min=14, max=139, avg=73.49, stdev=25.13 00:20:44.692 clat percentiles (msec): 00:20:44.692 | 1.00th=[ 22], 5.00th=[ 25], 10.00th=[ 39], 20.00th=[ 56], 00:20:44.692 | 30.00th=[ 63], 40.00th=[ 68], 50.00th=[ 71], 60.00th=[ 78], 00:20:44.692 | 70.00th=[ 89], 80.00th=[ 100], 90.00th=[ 107], 95.00th=[ 110], 00:20:44.692 | 99.00th=[ 124], 99.50th=[ 140], 99.90th=[ 140], 99.95th=[ 140], 00:20:44.692 | 99.99th=[ 140] 00:20:44.692 bw ( KiB/s): min= 640, max= 1907, per=3.99%, avg=866.85, stdev=275.34, samples=20 00:20:44.692 iops : min= 160, max= 476, avg=216.65, stdev=68.69, samples=20 00:20:44.692 lat (msec) : 20=0.83%, 50=16.72%, 100=63.90%, 250=18.56% 00:20:44.692 cpu : usr=40.74%, sys=1.56%, ctx=1251, majf=0, minf=9 00:20:44.692 IO depths : 1=0.1%, 2=1.7%, 4=7.1%, 8=75.7%, 16=15.4%, 32=0.0%, >=64=0.0% 00:20:44.692 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:44.692 complete : 0=0.0%, 4=89.3%, 8=9.1%, 16=1.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:44.692 issued rwts: total=2177,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:44.692 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:44.692 filename2: (groupid=0, jobs=1): err= 0: pid=83150: Fri Nov 15 11:04:29 2024 00:20:44.692 read: IOPS=225, BW=903KiB/s (925kB/s)(9084KiB/10057msec) 00:20:44.692 slat (usec): min=3, max=11034, avg=33.32, stdev=348.57 00:20:44.692 clat (msec): min=6, max=136, avg=70.51, stdev=26.59 00:20:44.692 lat (msec): min=6, max=136, avg=70.55, stdev=26.58 00:20:44.692 clat percentiles (msec): 00:20:44.692 | 1.00th=[ 9], 5.00th=[ 24], 10.00th=[ 33], 20.00th=[ 49], 00:20:44.692 | 30.00th=[ 59], 40.00th=[ 67], 50.00th=[ 71], 60.00th=[ 77], 00:20:44.692 | 70.00th=[ 88], 80.00th=[ 97], 90.00th=[ 105], 95.00th=[ 108], 
00:20:44.692 | 99.00th=[ 121], 99.50th=[ 123], 99.90th=[ 131], 99.95th=[ 134], 00:20:44.692 | 99.99th=[ 138] 00:20:44.692 bw ( KiB/s): min= 664, max= 2285, per=4.16%, avg=903.35, stdev=350.20, samples=20 00:20:44.692 iops : min= 166, max= 571, avg=225.80, stdev=87.51, samples=20 00:20:44.692 lat (msec) : 10=2.03%, 20=2.20%, 50=17.26%, 100=62.62%, 250=15.90% 00:20:44.692 cpu : usr=40.66%, sys=1.54%, ctx=1212, majf=0, minf=0 00:20:44.692 IO depths : 1=0.3%, 2=1.1%, 4=3.7%, 8=78.9%, 16=16.0%, 32=0.0%, >=64=0.0% 00:20:44.692 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:44.692 complete : 0=0.0%, 4=88.5%, 8=10.7%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:44.692 issued rwts: total=2271,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:44.692 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:44.692 filename2: (groupid=0, jobs=1): err= 0: pid=83151: Fri Nov 15 11:04:29 2024 00:20:44.692 read: IOPS=229, BW=916KiB/s (938kB/s)(9204KiB/10046msec) 00:20:44.692 slat (usec): min=3, max=8053, avg=30.68, stdev=301.50 00:20:44.692 clat (msec): min=7, max=132, avg=69.62, stdev=25.68 00:20:44.692 lat (msec): min=7, max=132, avg=69.65, stdev=25.67 00:20:44.692 clat percentiles (msec): 00:20:44.692 | 1.00th=[ 10], 5.00th=[ 24], 10.00th=[ 36], 20.00th=[ 48], 00:20:44.692 | 30.00th=[ 59], 40.00th=[ 65], 50.00th=[ 70], 60.00th=[ 73], 00:20:44.692 | 70.00th=[ 84], 80.00th=[ 96], 90.00th=[ 105], 95.00th=[ 108], 00:20:44.692 | 99.00th=[ 115], 99.50th=[ 117], 99.90th=[ 130], 99.95th=[ 132], 00:20:44.692 | 99.99th=[ 132] 00:20:44.692 bw ( KiB/s): min= 664, max= 2240, per=4.21%, avg=913.80, stdev=339.01, samples=20 00:20:44.692 iops : min= 166, max= 560, avg=228.40, stdev=84.76, samples=20 00:20:44.692 lat (msec) : 10=1.30%, 20=2.78%, 50=18.21%, 100=62.67%, 250=15.04% 00:20:44.692 cpu : usr=35.76%, sys=1.42%, ctx=1096, majf=0, minf=9 00:20:44.692 IO depths : 1=0.2%, 2=0.9%, 4=2.9%, 8=79.9%, 16=16.1%, 32=0.0%, >=64=0.0% 00:20:44.692 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:44.692 complete : 0=0.0%, 4=88.3%, 8=11.1%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:44.692 issued rwts: total=2301,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:44.692 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:44.692 filename2: (groupid=0, jobs=1): err= 0: pid=83152: Fri Nov 15 11:04:29 2024 00:20:44.692 read: IOPS=227, BW=911KiB/s (933kB/s)(9128KiB/10023msec) 00:20:44.692 slat (usec): min=4, max=12015, avg=36.08, stdev=365.12 00:20:44.692 clat (msec): min=15, max=138, avg=70.10, stdev=25.12 00:20:44.692 lat (msec): min=15, max=138, avg=70.13, stdev=25.12 00:20:44.692 clat percentiles (msec): 00:20:44.692 | 1.00th=[ 20], 5.00th=[ 24], 10.00th=[ 33], 20.00th=[ 48], 00:20:44.692 | 30.00th=[ 59], 40.00th=[ 65], 50.00th=[ 70], 60.00th=[ 73], 00:20:44.692 | 70.00th=[ 86], 80.00th=[ 96], 90.00th=[ 106], 95.00th=[ 108], 00:20:44.692 | 99.00th=[ 117], 99.50th=[ 118], 99.90th=[ 132], 99.95th=[ 133], 00:20:44.692 | 99.99th=[ 138] 00:20:44.692 bw ( KiB/s): min= 664, max= 2023, per=4.17%, avg=905.85, stdev=300.37, samples=20 00:20:44.692 iops : min= 166, max= 505, avg=226.40, stdev=74.96, samples=20 00:20:44.692 lat (msec) : 20=1.31%, 50=21.12%, 100=63.94%, 250=13.63% 00:20:44.692 cpu : usr=39.23%, sys=1.36%, ctx=1424, majf=0, minf=9 00:20:44.692 IO depths : 1=0.1%, 2=0.8%, 4=3.3%, 8=79.8%, 16=16.1%, 32=0.0%, >=64=0.0% 00:20:44.692 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:44.692 complete : 0=0.0%, 4=88.3%, 8=11.0%, 16=0.7%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:20:44.692 issued rwts: total=2282,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:44.692 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:44.692 filename2: (groupid=0, jobs=1): err= 0: pid=83153: Fri Nov 15 11:04:29 2024 00:20:44.692 read: IOPS=226, BW=906KiB/s (928kB/s)(9092KiB/10036msec) 00:20:44.692 slat (usec): min=5, max=8048, avg=34.97, stdev=317.75 00:20:44.692 clat (msec): min=2, max=131, avg=70.47, stdev=25.68 00:20:44.692 lat (msec): min=2, max=136, avg=70.50, stdev=25.69 00:20:44.692 clat percentiles (msec): 00:20:44.692 | 1.00th=[ 11], 5.00th=[ 24], 10.00th=[ 34], 20.00th=[ 50], 00:20:44.692 | 30.00th=[ 61], 40.00th=[ 65], 50.00th=[ 71], 60.00th=[ 74], 00:20:44.692 | 70.00th=[ 86], 80.00th=[ 96], 90.00th=[ 105], 95.00th=[ 108], 00:20:44.692 | 99.00th=[ 117], 99.50th=[ 117], 99.90th=[ 129], 99.95th=[ 132], 00:20:44.692 | 99.99th=[ 132] 00:20:44.692 bw ( KiB/s): min= 672, max= 2272, per=4.15%, avg=902.70, stdev=344.61, samples=20 00:20:44.692 iops : min= 168, max= 568, avg=225.65, stdev=86.16, samples=20 00:20:44.692 lat (msec) : 4=0.09%, 10=0.62%, 20=2.82%, 50=17.29%, 100=63.57% 00:20:44.692 lat (msec) : 250=15.62% 00:20:44.692 cpu : usr=32.62%, sys=1.30%, ctx=914, majf=0, minf=9 00:20:44.692 IO depths : 1=0.1%, 2=0.7%, 4=3.1%, 8=79.7%, 16=16.5%, 32=0.0%, >=64=0.0% 00:20:44.692 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:44.693 complete : 0=0.0%, 4=88.5%, 8=10.8%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:44.693 issued rwts: total=2273,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:44.693 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:44.693 filename2: (groupid=0, jobs=1): err= 0: pid=83154: Fri Nov 15 11:04:29 2024 00:20:44.693 read: IOPS=228, BW=915KiB/s (937kB/s)(9160KiB/10009msec) 00:20:44.693 slat (usec): min=4, max=8027, avg=31.06, stdev=264.68 00:20:44.693 clat (msec): min=10, max=135, avg=69.76, stdev=23.23 00:20:44.693 lat (msec): min=10, max=135, avg=69.79, stdev=23.23 00:20:44.693 clat percentiles (msec): 00:20:44.693 | 1.00th=[ 24], 5.00th=[ 33], 10.00th=[ 39], 20.00th=[ 48], 00:20:44.693 | 30.00th=[ 59], 40.00th=[ 64], 50.00th=[ 69], 60.00th=[ 72], 00:20:44.693 | 70.00th=[ 82], 80.00th=[ 96], 90.00th=[ 104], 95.00th=[ 108], 00:20:44.693 | 99.00th=[ 113], 99.50th=[ 116], 99.90th=[ 131], 99.95th=[ 133], 00:20:44.693 | 99.99th=[ 136] 00:20:44.693 bw ( KiB/s): min= 664, max= 1600, per=4.20%, avg=912.45, stdev=216.69, samples=20 00:20:44.693 iops : min= 166, max= 400, avg=228.00, stdev=54.19, samples=20 00:20:44.693 lat (msec) : 20=0.70%, 50=22.18%, 100=62.97%, 250=14.15% 00:20:44.693 cpu : usr=33.97%, sys=1.21%, ctx=954, majf=0, minf=10 00:20:44.693 IO depths : 1=0.1%, 2=0.6%, 4=2.1%, 8=81.3%, 16=15.9%, 32=0.0%, >=64=0.0% 00:20:44.693 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:44.693 complete : 0=0.0%, 4=87.6%, 8=11.9%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:44.693 issued rwts: total=2290,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:44.693 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:44.693 filename2: (groupid=0, jobs=1): err= 0: pid=83155: Fri Nov 15 11:04:29 2024 00:20:44.693 read: IOPS=223, BW=896KiB/s (917kB/s)(8984KiB/10030msec) 00:20:44.693 slat (usec): min=3, max=12054, avg=35.60, stdev=378.76 00:20:44.693 clat (msec): min=10, max=145, avg=71.19, stdev=25.37 00:20:44.693 lat (msec): min=10, max=145, avg=71.22, stdev=25.37 00:20:44.693 clat percentiles (msec): 00:20:44.693 | 1.00th=[ 16], 5.00th=[ 26], 10.00th=[ 36], 
20.00th=[ 48], 00:20:44.693 | 30.00th=[ 59], 40.00th=[ 64], 50.00th=[ 71], 60.00th=[ 74], 00:20:44.693 | 70.00th=[ 88], 80.00th=[ 96], 90.00th=[ 107], 95.00th=[ 109], 00:20:44.693 | 99.00th=[ 120], 99.50th=[ 120], 99.90th=[ 142], 99.95th=[ 146], 00:20:44.693 | 99.99th=[ 146] 00:20:44.693 bw ( KiB/s): min= 656, max= 1920, per=4.12%, avg=894.30, stdev=289.26, samples=20 00:20:44.693 iops : min= 164, max= 480, avg=223.55, stdev=72.32, samples=20 00:20:44.693 lat (msec) : 20=1.42%, 50=20.44%, 100=62.51%, 250=15.63% 00:20:44.693 cpu : usr=35.75%, sys=1.31%, ctx=1005, majf=0, minf=9 00:20:44.693 IO depths : 1=0.1%, 2=1.1%, 4=4.3%, 8=78.6%, 16=15.8%, 32=0.0%, >=64=0.0% 00:20:44.693 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:44.693 complete : 0=0.0%, 4=88.5%, 8=10.5%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:44.693 issued rwts: total=2246,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:44.693 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:44.693 filename2: (groupid=0, jobs=1): err= 0: pid=83156: Fri Nov 15 11:04:29 2024 00:20:44.693 read: IOPS=239, BW=957KiB/s (980kB/s)(9572KiB/10002msec) 00:20:44.693 slat (usec): min=6, max=8048, avg=34.07, stdev=246.59 00:20:44.693 clat (usec): min=823, max=128829, avg=66759.12, stdev=26884.60 00:20:44.693 lat (usec): min=831, max=128838, avg=66793.19, stdev=26887.72 00:20:44.693 clat percentiles (msec): 00:20:44.693 | 1.00th=[ 3], 5.00th=[ 6], 10.00th=[ 33], 20.00th=[ 47], 00:20:44.693 | 30.00th=[ 57], 40.00th=[ 62], 50.00th=[ 69], 60.00th=[ 72], 00:20:44.693 | 70.00th=[ 80], 80.00th=[ 95], 90.00th=[ 105], 95.00th=[ 108], 00:20:44.693 | 99.00th=[ 114], 99.50th=[ 117], 99.90th=[ 121], 99.95th=[ 121], 00:20:44.693 | 99.99th=[ 129] 00:20:44.693 bw ( KiB/s): min= 664, max= 1296, per=4.07%, avg=884.89, stdev=183.03, samples=19 00:20:44.693 iops : min= 166, max= 324, avg=221.21, stdev=45.75, samples=19 00:20:44.693 lat (usec) : 1000=0.25% 00:20:44.693 lat (msec) : 4=3.51%, 10=2.21%, 20=1.04%, 50=18.64%, 100=61.68% 00:20:44.693 lat (msec) : 250=12.66% 00:20:44.693 cpu : usr=36.81%, sys=1.23%, ctx=997, majf=0, minf=9 00:20:44.693 IO depths : 1=0.2%, 2=0.8%, 4=2.8%, 8=80.6%, 16=15.7%, 32=0.0%, >=64=0.0% 00:20:44.693 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:44.693 complete : 0=0.0%, 4=87.9%, 8=11.5%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:44.693 issued rwts: total=2393,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:44.693 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:44.693 filename2: (groupid=0, jobs=1): err= 0: pid=83157: Fri Nov 15 11:04:29 2024 00:20:44.693 read: IOPS=231, BW=927KiB/s (949kB/s)(9272KiB/10003msec) 00:20:44.693 slat (usec): min=4, max=8031, avg=30.46, stdev=245.27 00:20:44.693 clat (msec): min=5, max=116, avg=68.92, stdev=23.97 00:20:44.693 lat (msec): min=5, max=116, avg=68.95, stdev=23.97 00:20:44.693 clat percentiles (msec): 00:20:44.693 | 1.00th=[ 10], 5.00th=[ 24], 10.00th=[ 37], 20.00th=[ 50], 00:20:44.693 | 30.00th=[ 59], 40.00th=[ 64], 50.00th=[ 69], 60.00th=[ 72], 00:20:44.693 | 70.00th=[ 81], 80.00th=[ 94], 90.00th=[ 104], 95.00th=[ 107], 00:20:44.693 | 99.00th=[ 113], 99.50th=[ 115], 99.90th=[ 117], 99.95th=[ 117], 00:20:44.693 | 99.99th=[ 117] 00:20:44.693 bw ( KiB/s): min= 712, max= 1648, per=4.13%, avg=896.32, stdev=221.45, samples=19 00:20:44.693 iops : min= 178, max= 412, avg=224.05, stdev=55.35, samples=19 00:20:44.693 lat (msec) : 10=1.34%, 20=1.12%, 50=18.29%, 100=66.44%, 250=12.81% 00:20:44.693 cpu : usr=37.31%, sys=1.49%, 
ctx=1065, majf=0, minf=9 00:20:44.693 IO depths : 1=0.1%, 2=1.2%, 4=4.8%, 8=78.6%, 16=15.3%, 32=0.0%, >=64=0.0% 00:20:44.693 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:44.693 complete : 0=0.0%, 4=88.3%, 8=10.7%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:44.693 issued rwts: total=2318,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:44.693 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:44.693 00:20:44.693 Run status group 0 (all jobs): 00:20:44.693 READ: bw=21.2MiB/s (22.2MB/s), 859KiB/s-957KiB/s (880kB/s-980kB/s), io=213MiB (224MB), run=10001-10057msec 00:20:44.693 11:04:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:20:44.693 11:04:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:20:44.693 11:04:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:44.693 11:04:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:44.693 11:04:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:20:44.693 11:04:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:44.693 11:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.693 11:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:44.693 11:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.693 11:04:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:44.693 11:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.693 11:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:44.693 11:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.693 11:04:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:44.693 11:04:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:20:44.693 11:04:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:20:44.693 11:04:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:44.693 11:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.693 11:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:44.693 11:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.693 11:04:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:20:44.693 11:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.693 11:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:44.693 11:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.693 11:04:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:44.693 11:04:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:20:44.693 11:04:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:20:44.693 11:04:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:20:44.693 11:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:20:44.693 11:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:44.693 11:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.693 11:04:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:20:44.693 11:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.693 11:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:44.693 11:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.693 11:04:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:20:44.693 11:04:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:20:44.693 11:04:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:20:44.693 11:04:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:20:44.693 11:04:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:20:44.693 11:04:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:20:44.693 11:04:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:20:44.693 11:04:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:20:44.693 11:04:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:44.693 11:04:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:20:44.693 11:04:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:20:44.693 11:04:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:20:44.693 11:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.693 11:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:44.693 bdev_null0 00:20:44.693 11:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.693 11:04:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:44.693 11:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.694 11:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:44.694 11:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.694 11:04:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:44.694 11:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.694 11:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:44.694 11:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.694 11:04:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:20:44.694 11:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.694 11:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:44.694 [2024-11-15 11:04:29.928023] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:44.694 11:04:29 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.694 11:04:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:44.694 11:04:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:20:44.694 11:04:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:20:44.694 11:04:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:20:44.694 11:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.694 11:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:44.694 bdev_null1 00:20:44.694 11:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.694 11:04:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:20:44.694 11:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.694 11:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:44.694 11:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.694 11:04:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:20:44.694 11:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.694 11:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:44.694 11:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.694 11:04:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:44.694 11:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.694 11:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:44.694 11:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.694 11:04:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:20:44.694 11:04:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:20:44.694 11:04:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:20:44.694 11:04:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:20:44.694 11:04:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:20:44.694 11:04:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:44.694 11:04:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:44.694 { 00:20:44.694 "params": { 00:20:44.694 "name": "Nvme$subsystem", 00:20:44.694 "trtype": "$TEST_TRANSPORT", 00:20:44.694 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:44.694 "adrfam": "ipv4", 00:20:44.694 "trsvcid": "$NVMF_PORT", 00:20:44.694 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:44.694 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:44.694 "hdgst": ${hdgst:-false}, 00:20:44.694 "ddgst": ${ddgst:-false} 00:20:44.694 }, 00:20:44.694 "method": "bdev_nvme_attach_controller" 00:20:44.694 } 00:20:44.694 EOF 00:20:44.694 )") 00:20:44.694 11:04:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev 
--ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:44.694 11:04:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:20:44.694 11:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:44.694 11:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:44.694 11:04:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:20:44.694 11:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:44.694 11:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:44.694 11:04:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:20:44.694 11:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:44.694 11:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:20:44.694 11:04:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:20:44.694 11:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:44.694 11:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:44.694 11:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:44.694 11:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:44.694 11:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:20:44.694 11:04:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:20:44.694 11:04:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:44.694 11:04:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:20:44.694 11:04:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:44.694 11:04:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:44.694 { 00:20:44.694 "params": { 00:20:44.694 "name": "Nvme$subsystem", 00:20:44.694 "trtype": "$TEST_TRANSPORT", 00:20:44.694 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:44.694 "adrfam": "ipv4", 00:20:44.694 "trsvcid": "$NVMF_PORT", 00:20:44.694 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:44.694 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:44.694 "hdgst": ${hdgst:-false}, 00:20:44.694 "ddgst": ${ddgst:-false} 00:20:44.694 }, 00:20:44.694 "method": "bdev_nvme_attach_controller" 00:20:44.694 } 00:20:44.694 EOF 00:20:44.694 )") 00:20:44.694 11:04:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:20:44.694 11:04:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:20:44.694 11:04:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:44.694 11:04:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:20:44.694 11:04:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:20:44.694 11:04:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:44.694 "params": { 00:20:44.694 "name": "Nvme0", 00:20:44.694 "trtype": "tcp", 00:20:44.694 "traddr": "10.0.0.3", 00:20:44.694 "adrfam": "ipv4", 00:20:44.694 "trsvcid": "4420", 00:20:44.694 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:44.694 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:44.694 "hdgst": false, 00:20:44.694 "ddgst": false 00:20:44.694 }, 00:20:44.694 "method": "bdev_nvme_attach_controller" 00:20:44.694 },{ 00:20:44.694 "params": { 00:20:44.694 "name": "Nvme1", 00:20:44.694 "trtype": "tcp", 00:20:44.694 "traddr": "10.0.0.3", 00:20:44.694 "adrfam": "ipv4", 00:20:44.694 "trsvcid": "4420", 00:20:44.694 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:44.694 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:44.694 "hdgst": false, 00:20:44.694 "ddgst": false 00:20:44.694 }, 00:20:44.694 "method": "bdev_nvme_attach_controller" 00:20:44.694 }' 00:20:44.694 11:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:20:44.694 11:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:20:44.694 11:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:44.694 11:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:44.694 11:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:20:44.694 11:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:44.694 11:04:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:20:44.694 11:04:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:20:44.694 11:04:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:44.694 11:04:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:44.694 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:20:44.694 ... 00:20:44.694 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:20:44.694 ... 
00:20:44.694 fio-3.35 00:20:44.694 Starting 4 threads 00:20:49.968 00:20:49.968 filename0: (groupid=0, jobs=1): err= 0: pid=83289: Fri Nov 15 11:04:35 2024 00:20:49.968 read: IOPS=2166, BW=16.9MiB/s (17.8MB/s)(84.7MiB/5001msec) 00:20:49.968 slat (nsec): min=3747, max=89852, avg=18709.30, stdev=10929.37 00:20:49.968 clat (usec): min=504, max=6737, avg=3626.06, stdev=786.21 00:20:49.968 lat (usec): min=517, max=6782, avg=3644.77, stdev=785.71 00:20:49.968 clat percentiles (usec): 00:20:49.968 | 1.00th=[ 1745], 5.00th=[ 2008], 10.00th=[ 2311], 20.00th=[ 2900], 00:20:49.968 | 30.00th=[ 3392], 40.00th=[ 3720], 50.00th=[ 3916], 60.00th=[ 4015], 00:20:49.968 | 70.00th=[ 4080], 80.00th=[ 4228], 90.00th=[ 4424], 95.00th=[ 4490], 00:20:49.968 | 99.00th=[ 4817], 99.50th=[ 5014], 99.90th=[ 5669], 99.95th=[ 6259], 00:20:49.968 | 99.99th=[ 6521] 00:20:49.968 bw ( KiB/s): min=14592, max=19936, per=23.81%, avg=17634.11, stdev=1470.92, samples=9 00:20:49.968 iops : min= 1824, max= 2492, avg=2204.22, stdev=183.86, samples=9 00:20:49.968 lat (usec) : 750=0.01% 00:20:49.968 lat (msec) : 2=4.95%, 4=55.18%, 10=39.86% 00:20:49.968 cpu : usr=92.54%, sys=6.48%, ctx=161, majf=0, minf=0 00:20:49.968 IO depths : 1=1.4%, 2=13.8%, 4=56.9%, 8=28.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:49.968 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:49.968 complete : 0=0.0%, 4=94.6%, 8=5.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:49.968 issued rwts: total=10837,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:49.968 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:49.968 filename0: (groupid=0, jobs=1): err= 0: pid=83290: Fri Nov 15 11:04:35 2024 00:20:49.968 read: IOPS=2387, BW=18.7MiB/s (19.6MB/s)(93.3MiB/5002msec) 00:20:49.968 slat (nsec): min=3511, max=97528, avg=22341.86, stdev=10868.39 00:20:49.968 clat (usec): min=925, max=5989, avg=3285.91, stdev=848.99 00:20:49.968 lat (usec): min=934, max=6037, avg=3308.26, stdev=848.36 00:20:49.968 clat percentiles (usec): 00:20:49.968 | 1.00th=[ 1745], 5.00th=[ 1942], 10.00th=[ 2089], 20.00th=[ 2311], 00:20:49.968 | 30.00th=[ 2606], 40.00th=[ 3064], 50.00th=[ 3490], 60.00th=[ 3785], 00:20:49.968 | 70.00th=[ 3949], 80.00th=[ 4080], 90.00th=[ 4228], 95.00th=[ 4359], 00:20:49.968 | 99.00th=[ 4686], 99.50th=[ 4817], 99.90th=[ 5276], 99.95th=[ 5407], 00:20:49.968 | 99.99th=[ 5538] 00:20:49.968 bw ( KiB/s): min=16256, max=21120, per=25.68%, avg=19018.67, stdev=1741.26, samples=9 00:20:49.968 iops : min= 2032, max= 2640, avg=2377.33, stdev=217.66, samples=9 00:20:49.968 lat (usec) : 1000=0.02% 00:20:49.968 lat (msec) : 2=7.40%, 4=65.97%, 10=26.61% 00:20:49.968 cpu : usr=93.44%, sys=5.50%, ctx=99, majf=0, minf=1 00:20:49.968 IO depths : 1=1.3%, 2=6.3%, 4=60.9%, 8=31.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:49.968 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:49.968 complete : 0=0.0%, 4=97.6%, 8=2.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:49.968 issued rwts: total=11942,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:49.968 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:49.968 filename1: (groupid=0, jobs=1): err= 0: pid=83291: Fri Nov 15 11:04:35 2024 00:20:49.968 read: IOPS=2388, BW=18.7MiB/s (19.6MB/s)(93.3MiB/5002msec) 00:20:49.968 slat (usec): min=6, max=197, avg=19.54, stdev=10.81 00:20:49.968 clat (usec): min=1046, max=5928, avg=3294.41, stdev=878.94 00:20:49.968 lat (usec): min=1054, max=5962, avg=3313.94, stdev=878.59 00:20:49.968 clat percentiles (usec): 00:20:49.968 | 1.00th=[ 1549], 5.00th=[ 1909], 
10.00th=[ 1975], 20.00th=[ 2278], 00:20:49.968 | 30.00th=[ 2737], 40.00th=[ 3097], 50.00th=[ 3490], 60.00th=[ 3752], 00:20:49.968 | 70.00th=[ 3916], 80.00th=[ 4113], 90.00th=[ 4359], 95.00th=[ 4490], 00:20:49.968 | 99.00th=[ 4752], 99.50th=[ 4948], 99.90th=[ 5407], 99.95th=[ 5538], 00:20:49.968 | 99.99th=[ 5800] 00:20:49.968 bw ( KiB/s): min=16352, max=20400, per=25.69%, avg=19025.78, stdev=1309.37, samples=9 00:20:49.968 iops : min= 2044, max= 2550, avg=2378.22, stdev=163.67, samples=9 00:20:49.968 lat (msec) : 2=11.46%, 4=62.96%, 10=25.58% 00:20:49.968 cpu : usr=93.52%, sys=5.24%, ctx=81, majf=0, minf=0 00:20:49.968 IO depths : 1=1.0%, 2=6.5%, 4=60.7%, 8=31.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:49.968 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:49.968 complete : 0=0.0%, 4=97.5%, 8=2.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:49.968 issued rwts: total=11945,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:49.968 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:49.968 filename1: (groupid=0, jobs=1): err= 0: pid=83292: Fri Nov 15 11:04:35 2024 00:20:49.968 read: IOPS=2314, BW=18.1MiB/s (19.0MB/s)(90.5MiB/5002msec) 00:20:49.968 slat (nsec): min=4003, max=80407, avg=20472.40, stdev=9547.94 00:20:49.968 clat (usec): min=751, max=6128, avg=3395.29, stdev=856.27 00:20:49.968 lat (usec): min=764, max=6168, avg=3415.77, stdev=855.87 00:20:49.968 clat percentiles (usec): 00:20:49.968 | 1.00th=[ 1106], 5.00th=[ 1909], 10.00th=[ 2180], 20.00th=[ 2474], 00:20:49.969 | 30.00th=[ 2900], 40.00th=[ 3359], 50.00th=[ 3720], 60.00th=[ 3949], 00:20:49.969 | 70.00th=[ 4015], 80.00th=[ 4080], 90.00th=[ 4228], 95.00th=[ 4424], 00:20:49.969 | 99.00th=[ 4621], 99.50th=[ 4817], 99.90th=[ 5473], 99.95th=[ 5538], 00:20:49.969 | 99.99th=[ 5800] 00:20:49.969 bw ( KiB/s): min=16240, max=21376, per=24.80%, avg=18369.78, stdev=1689.69, samples=9 00:20:49.969 iops : min= 2030, max= 2672, avg=2296.22, stdev=211.21, samples=9 00:20:49.969 lat (usec) : 1000=0.24% 00:20:49.969 lat (msec) : 2=5.50%, 4=63.65%, 10=30.61% 00:20:49.969 cpu : usr=94.20%, sys=4.86%, ctx=11, majf=0, minf=0 00:20:49.969 IO depths : 1=1.1%, 2=9.0%, 4=59.3%, 8=30.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:49.969 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:49.969 complete : 0=0.0%, 4=96.5%, 8=3.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:49.969 issued rwts: total=11579,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:49.969 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:49.969 00:20:49.969 Run status group 0 (all jobs): 00:20:49.969 READ: bw=72.3MiB/s (75.8MB/s), 16.9MiB/s-18.7MiB/s (17.8MB/s-19.6MB/s), io=362MiB (379MB), run=5001-5002msec 00:20:49.969 11:04:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:20:49.969 11:04:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:20:49.969 11:04:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:49.969 11:04:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:49.969 11:04:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:20:49.969 11:04:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:49.969 11:04:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.969 11:04:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:49.969 11:04:35 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.969 11:04:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:49.969 11:04:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.969 11:04:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:49.969 11:04:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.969 11:04:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:49.969 11:04:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:20:49.969 11:04:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:20:49.969 11:04:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:49.969 11:04:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.969 11:04:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:49.969 11:04:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.969 11:04:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:20:49.969 11:04:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.969 11:04:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:49.969 ************************************ 00:20:49.969 END TEST fio_dif_rand_params 00:20:49.969 ************************************ 00:20:49.969 11:04:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.969 00:20:49.969 real 0m23.670s 00:20:49.969 user 2m6.360s 00:20:49.969 sys 0m6.186s 00:20:49.969 11:04:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:49.969 11:04:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:49.969 11:04:36 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:20:49.969 11:04:36 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:49.969 11:04:36 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:49.969 11:04:36 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:49.969 ************************************ 00:20:49.969 START TEST fio_dif_digest 00:20:49.969 ************************************ 00:20:49.969 11:04:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:20:49.969 11:04:36 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:20:49.969 11:04:36 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:20:49.969 11:04:36 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:20:49.969 11:04:36 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:20:49.969 11:04:36 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:20:49.969 11:04:36 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:20:49.969 11:04:36 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:20:49.969 11:04:36 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:20:49.969 11:04:36 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:20:49.969 11:04:36 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:20:49.969 11:04:36 
nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:20:49.969 11:04:36 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:20:49.969 11:04:36 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:20:49.969 11:04:36 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:20:49.969 11:04:36 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:20:49.969 11:04:36 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:20:49.969 11:04:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.969 11:04:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:49.969 bdev_null0 00:20:49.969 11:04:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.969 11:04:36 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:49.969 11:04:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.969 11:04:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:49.969 11:04:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.969 11:04:36 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:49.969 11:04:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.969 11:04:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:49.969 11:04:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.969 11:04:36 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:20:49.969 11:04:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.969 11:04:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:49.969 [2024-11-15 11:04:36.113407] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:49.969 11:04:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.969 11:04:36 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:20:49.969 11:04:36 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:20:49.969 11:04:36 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:20:49.969 11:04:36 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:20:49.969 11:04:36 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:20:49.969 11:04:36 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:49.969 11:04:36 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:49.969 11:04:36 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:20:49.969 11:04:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:49.969 11:04:36 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:49.969 { 00:20:49.969 "params": { 00:20:49.969 "name": "Nvme$subsystem", 00:20:49.969 "trtype": "$TEST_TRANSPORT", 00:20:49.969 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:20:49.969 "adrfam": "ipv4", 00:20:49.969 "trsvcid": "$NVMF_PORT", 00:20:49.969 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:49.969 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:49.969 "hdgst": ${hdgst:-false}, 00:20:49.969 "ddgst": ${ddgst:-false} 00:20:49.969 }, 00:20:49.969 "method": "bdev_nvme_attach_controller" 00:20:49.969 } 00:20:49.969 EOF 00:20:49.969 )") 00:20:49.969 11:04:36 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:20:49.969 11:04:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:49.969 11:04:36 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:20:49.969 11:04:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:49.969 11:04:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:49.969 11:04:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:49.969 11:04:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:20:49.969 11:04:36 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:20:49.969 11:04:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:49.969 11:04:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:49.969 11:04:36 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:20:49.969 11:04:36 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:20:49.969 11:04:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:20:49.969 11:04:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:49.969 11:04:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:49.969 11:04:36 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:20:49.969 11:04:36 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:20:49.969 11:04:36 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:49.969 "params": { 00:20:49.969 "name": "Nvme0", 00:20:49.969 "trtype": "tcp", 00:20:49.969 "traddr": "10.0.0.3", 00:20:49.969 "adrfam": "ipv4", 00:20:49.969 "trsvcid": "4420", 00:20:49.969 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:49.969 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:49.969 "hdgst": true, 00:20:49.969 "ddgst": true 00:20:49.969 }, 00:20:49.969 "method": "bdev_nvme_attach_controller" 00:20:49.969 }' 00:20:49.969 11:04:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:20:49.970 11:04:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:20:49.970 11:04:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:49.970 11:04:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:20:49.970 11:04:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:49.970 11:04:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:49.970 11:04:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:20:49.970 11:04:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:20:49.970 11:04:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:49.970 11:04:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:49.970 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:20:49.970 ... 
00:20:49.970 fio-3.35 00:20:49.970 Starting 3 threads 00:21:02.225 00:21:02.225 filename0: (groupid=0, jobs=1): err= 0: pid=83402: Fri Nov 15 11:04:46 2024 00:21:02.225 read: IOPS=258, BW=32.4MiB/s (33.9MB/s)(324MiB/10002msec) 00:21:02.225 slat (nsec): min=5417, max=97157, avg=27412.50, stdev=13923.60 00:21:02.225 clat (usec): min=7979, max=12831, avg=11527.74, stdev=220.24 00:21:02.225 lat (usec): min=7992, max=12864, avg=11555.16, stdev=221.54 00:21:02.225 clat percentiles (usec): 00:21:02.225 | 1.00th=[11338], 5.00th=[11338], 10.00th=[11338], 20.00th=[11338], 00:21:02.225 | 30.00th=[11469], 40.00th=[11469], 50.00th=[11469], 60.00th=[11469], 00:21:02.225 | 70.00th=[11600], 80.00th=[11731], 90.00th=[11731], 95.00th=[11863], 00:21:02.225 | 99.00th=[12125], 99.50th=[12256], 99.90th=[12780], 99.95th=[12780], 00:21:02.225 | 99.99th=[12780] 00:21:02.225 bw ( KiB/s): min=33024, max=33792, per=33.29%, avg=33104.84, stdev=242.15, samples=19 00:21:02.225 iops : min= 258, max= 264, avg=258.63, stdev= 1.89, samples=19 00:21:02.225 lat (msec) : 10=0.12%, 20=99.88% 00:21:02.225 cpu : usr=95.24%, sys=4.17%, ctx=13, majf=0, minf=0 00:21:02.225 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:02.225 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:02.225 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:02.225 issued rwts: total=2589,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:02.225 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:02.225 filename0: (groupid=0, jobs=1): err= 0: pid=83403: Fri Nov 15 11:04:46 2024 00:21:02.225 read: IOPS=259, BW=32.4MiB/s (34.0MB/s)(324MiB/10001msec) 00:21:02.225 slat (nsec): min=6642, max=68125, avg=20817.19, stdev=10694.86 00:21:02.225 clat (usec): min=4169, max=12366, avg=11529.30, stdev=345.19 00:21:02.225 lat (usec): min=4178, max=12392, avg=11550.12, stdev=345.85 00:21:02.225 clat percentiles (usec): 00:21:02.225 | 1.00th=[11338], 5.00th=[11338], 10.00th=[11338], 20.00th=[11469], 00:21:02.225 | 30.00th=[11469], 40.00th=[11469], 50.00th=[11469], 60.00th=[11469], 00:21:02.225 | 70.00th=[11600], 80.00th=[11731], 90.00th=[11863], 95.00th=[11863], 00:21:02.225 | 99.00th=[12125], 99.50th=[12256], 99.90th=[12256], 99.95th=[12387], 00:21:02.225 | 99.99th=[12387] 00:21:02.225 bw ( KiB/s): min=33024, max=33792, per=33.37%, avg=33185.68, stdev=321.68, samples=19 00:21:02.225 iops : min= 258, max= 264, avg=259.26, stdev= 2.51, samples=19 00:21:02.225 lat (msec) : 10=0.35%, 20=99.65% 00:21:02.225 cpu : usr=95.27%, sys=4.23%, ctx=22, majf=0, minf=0 00:21:02.225 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:02.225 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:02.225 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:02.226 issued rwts: total=2592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:02.226 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:02.226 filename0: (groupid=0, jobs=1): err= 0: pid=83404: Fri Nov 15 11:04:46 2024 00:21:02.226 read: IOPS=258, BW=32.4MiB/s (33.9MB/s)(324MiB/10002msec) 00:21:02.226 slat (nsec): min=4437, max=96853, avg=27193.11, stdev=13794.57 00:21:02.226 clat (usec): min=7946, max=12709, avg=11526.09, stdev=218.54 00:21:02.226 lat (usec): min=7960, max=12723, avg=11553.28, stdev=220.05 00:21:02.226 clat percentiles (usec): 00:21:02.226 | 1.00th=[11338], 5.00th=[11338], 10.00th=[11338], 20.00th=[11338], 00:21:02.226 | 30.00th=[11469], 
40.00th=[11469], 50.00th=[11469], 60.00th=[11469], 00:21:02.226 | 70.00th=[11600], 80.00th=[11731], 90.00th=[11731], 95.00th=[11863], 00:21:02.226 | 99.00th=[12125], 99.50th=[12256], 99.90th=[12649], 99.95th=[12649], 00:21:02.226 | 99.99th=[12649] 00:21:02.226 bw ( KiB/s): min=33024, max=33792, per=33.29%, avg=33104.84, stdev=242.15, samples=19 00:21:02.226 iops : min= 258, max= 264, avg=258.63, stdev= 1.89, samples=19 00:21:02.226 lat (msec) : 10=0.12%, 20=99.88% 00:21:02.226 cpu : usr=94.31%, sys=5.20%, ctx=9, majf=0, minf=0 00:21:02.226 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:02.226 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:02.226 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:02.226 issued rwts: total=2589,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:02.226 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:02.226 00:21:02.226 Run status group 0 (all jobs): 00:21:02.226 READ: bw=97.1MiB/s (102MB/s), 32.4MiB/s-32.4MiB/s (33.9MB/s-34.0MB/s), io=971MiB (1018MB), run=10001-10002msec 00:21:02.226 11:04:47 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:21:02.226 11:04:47 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:21:02.226 11:04:47 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:21:02.226 11:04:47 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:21:02.226 11:04:47 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:21:02.226 11:04:47 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:02.226 11:04:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.226 11:04:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:02.226 11:04:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.226 11:04:47 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:21:02.226 11:04:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.226 11:04:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:02.226 ************************************ 00:21:02.226 END TEST fio_dif_digest 00:21:02.226 ************************************ 00:21:02.226 11:04:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.226 00:21:02.226 real 0m11.045s 00:21:02.226 user 0m29.150s 00:21:02.226 sys 0m1.661s 00:21:02.226 11:04:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:02.226 11:04:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:02.226 11:04:47 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:21:02.226 11:04:47 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:21:02.226 11:04:47 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:02.226 11:04:47 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:21:02.226 11:04:47 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:02.226 11:04:47 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:21:02.226 11:04:47 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:02.226 11:04:47 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:02.226 rmmod nvme_tcp 00:21:02.226 rmmod nvme_fabrics 00:21:02.226 rmmod nvme_keyring 00:21:02.226 11:04:47 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
00:21:02.226 11:04:47 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:21:02.226 11:04:47 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:21:02.226 11:04:47 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 82655 ']' 00:21:02.226 11:04:47 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 82655 00:21:02.226 11:04:47 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 82655 ']' 00:21:02.226 11:04:47 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 82655 00:21:02.226 11:04:47 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:21:02.226 11:04:47 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:02.226 11:04:47 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82655 00:21:02.226 killing process with pid 82655 00:21:02.226 11:04:47 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:02.226 11:04:47 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:02.226 11:04:47 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82655' 00:21:02.226 11:04:47 nvmf_dif -- common/autotest_common.sh@973 -- # kill 82655 00:21:02.226 11:04:47 nvmf_dif -- common/autotest_common.sh@978 -- # wait 82655 00:21:02.226 11:04:47 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:21:02.226 11:04:47 nvmf_dif -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:02.226 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:02.226 Waiting for block devices as requested 00:21:02.226 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:02.226 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:02.226 11:04:48 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:02.226 11:04:48 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:02.226 11:04:48 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:21:02.226 11:04:48 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:21:02.226 11:04:48 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:02.226 11:04:48 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:21:02.226 11:04:48 nvmf_dif -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:02.226 11:04:48 nvmf_dif -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:02.226 11:04:48 nvmf_dif -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:02.226 11:04:48 nvmf_dif -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:02.226 11:04:48 nvmf_dif -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:02.226 11:04:48 nvmf_dif -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:02.226 11:04:48 nvmf_dif -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:02.226 11:04:48 nvmf_dif -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:02.226 11:04:48 nvmf_dif -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:02.226 11:04:48 nvmf_dif -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:02.226 11:04:48 nvmf_dif -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:02.226 11:04:48 nvmf_dif -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:02.226 11:04:48 nvmf_dif -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:02.226 11:04:48 nvmf_dif -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:02.226 11:04:48 nvmf_dif -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 
00:21:02.226 11:04:48 nvmf_dif -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:02.226 11:04:48 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:02.226 11:04:48 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:02.226 11:04:48 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:02.226 11:04:48 nvmf_dif -- nvmf/common.sh@300 -- # return 0 00:21:02.226 00:21:02.226 real 0m59.970s 00:21:02.226 user 3m50.224s 00:21:02.226 sys 0m17.564s 00:21:02.226 11:04:48 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:02.226 11:04:48 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:21:02.226 ************************************ 00:21:02.226 END TEST nvmf_dif 00:21:02.226 ************************************ 00:21:02.226 11:04:48 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:21:02.226 11:04:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:02.226 11:04:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:02.226 11:04:48 -- common/autotest_common.sh@10 -- # set +x 00:21:02.226 ************************************ 00:21:02.226 START TEST nvmf_abort_qd_sizes 00:21:02.226 ************************************ 00:21:02.226 11:04:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:21:02.226 * Looking for test storage... 00:21:02.226 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:02.226 11:04:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:02.226 11:04:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:21:02.226 11:04:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:02.226 11:04:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:02.226 11:04:48 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:02.226 11:04:48 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:02.226 11:04:48 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:02.226 11:04:48 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:21:02.226 11:04:48 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:21:02.226 11:04:48 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:21:02.226 11:04:48 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:21:02.226 11:04:48 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:21:02.226 11:04:48 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:21:02.226 11:04:48 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:21:02.226 11:04:48 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:02.226 11:04:48 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:21:02.226 11:04:48 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:21:02.226 11:04:48 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:02.226 11:04:48 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:02.226 11:04:48 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:21:02.226 11:04:48 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:21:02.226 11:04:48 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:02.226 11:04:48 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:21:02.226 11:04:48 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:21:02.226 11:04:48 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:21:02.226 11:04:48 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:21:02.226 11:04:48 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:02.226 11:04:48 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:21:02.227 11:04:48 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:21:02.227 11:04:48 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:02.227 11:04:48 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:02.227 11:04:48 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:21:02.227 11:04:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:02.227 11:04:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:02.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:02.227 --rc genhtml_branch_coverage=1 00:21:02.227 --rc genhtml_function_coverage=1 00:21:02.227 --rc genhtml_legend=1 00:21:02.227 --rc geninfo_all_blocks=1 00:21:02.227 --rc geninfo_unexecuted_blocks=1 00:21:02.227 00:21:02.227 ' 00:21:02.227 11:04:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:02.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:02.227 --rc genhtml_branch_coverage=1 00:21:02.227 --rc genhtml_function_coverage=1 00:21:02.227 --rc genhtml_legend=1 00:21:02.227 --rc geninfo_all_blocks=1 00:21:02.227 --rc geninfo_unexecuted_blocks=1 00:21:02.227 00:21:02.227 ' 00:21:02.227 11:04:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:02.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:02.227 --rc genhtml_branch_coverage=1 00:21:02.227 --rc genhtml_function_coverage=1 00:21:02.227 --rc genhtml_legend=1 00:21:02.227 --rc geninfo_all_blocks=1 00:21:02.227 --rc geninfo_unexecuted_blocks=1 00:21:02.227 00:21:02.227 ' 00:21:02.227 11:04:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:02.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:02.227 --rc genhtml_branch_coverage=1 00:21:02.227 --rc genhtml_function_coverage=1 00:21:02.227 --rc genhtml_legend=1 00:21:02.227 --rc geninfo_all_blocks=1 00:21:02.227 --rc geninfo_unexecuted_blocks=1 00:21:02.227 00:21:02.227 ' 00:21:02.227 11:04:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:02.227 11:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:21:02.227 11:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:02.227 11:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:02.227 11:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:02.227 11:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:02.227 11:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:21:02.227 11:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:02.227 11:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:02.227 11:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:02.227 11:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:02.227 11:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:02.227 11:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:21:02.227 11:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:21:02.227 11:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:02.227 11:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:02.227 11:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:02.227 11:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:02.227 11:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:02.227 11:04:48 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:21:02.227 11:04:48 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:02.227 11:04:48 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:02.227 11:04:48 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:02.227 11:04:48 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:02.227 11:04:48 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:02.227 11:04:48 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:02.227 11:04:48 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:21:02.227 11:04:48 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:02.227 11:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:21:02.227 11:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:02.227 11:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:02.227 11:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:02.227 11:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:02.227 11:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:02.227 11:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:02.227 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:02.227 11:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:02.227 11:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:02.227 11:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:02.227 11:04:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:21:02.227 11:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:02.227 11:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:02.227 11:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:02.227 11:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:02.227 11:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:02.227 11:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:02.227 11:04:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:02.227 11:04:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:02.227 11:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:21:02.227 11:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:21:02.227 11:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:21:02.227 11:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:21:02.227 11:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:21:02.227 11:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@460 -- # nvmf_veth_init 00:21:02.227 11:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:02.227 11:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:02.227 11:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:02.227 11:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:02.227 11:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:02.227 11:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:02.227 11:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:02.227 11:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # 
NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:02.227 11:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:02.227 11:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:02.227 11:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:02.227 11:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:02.227 11:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:02.227 11:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:02.227 11:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:02.227 11:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:02.227 11:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:02.227 Cannot find device "nvmf_init_br" 00:21:02.227 11:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:21:02.227 11:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:02.227 Cannot find device "nvmf_init_br2" 00:21:02.227 11:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:21:02.227 11:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:21:02.227 Cannot find device "nvmf_tgt_br" 00:21:02.227 11:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # true 00:21:02.227 11:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:21:02.227 Cannot find device "nvmf_tgt_br2" 00:21:02.227 11:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # true 00:21:02.227 11:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:02.227 Cannot find device "nvmf_init_br" 00:21:02.227 11:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # true 00:21:02.227 11:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:02.227 Cannot find device "nvmf_init_br2" 00:21:02.227 11:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # true 00:21:02.227 11:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:02.227 Cannot find device "nvmf_tgt_br" 00:21:02.227 11:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # true 00:21:02.227 11:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:02.227 Cannot find device "nvmf_tgt_br2" 00:21:02.227 11:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # true 00:21:02.227 11:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:02.228 Cannot find device "nvmf_br" 00:21:02.228 11:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # true 00:21:02.228 11:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:02.228 Cannot find device "nvmf_init_if" 00:21:02.228 11:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # true 00:21:02.228 11:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:02.228 Cannot find device "nvmf_init_if2" 00:21:02.228 11:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # true 00:21:02.228 11:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:02.228 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 
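The "Cannot find device" and "Cannot open network namespace" messages above are expected: nvmf_veth_init first tears down any interfaces left over from a previous run, and every ip command in that teardown is followed by true so a missing device never aborts the test. The commands that follow then rebuild the test topology: a network namespace for the target, veth pairs for the initiator and target sides, and a bridge joining the bridge-side peers, with TCP port 4420 opened in iptables. A condensed sketch of that topology, reusing the interface names and addresses from this log (an illustration of the approach, not the nvmf/common.sh implementation):

    # Tolerant teardown: devices may not exist yet, so never fail here
    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link delete "$dev" 2>/dev/null || true
    done
    ip link delete nvmf_br type bridge 2>/dev/null || true
    ip netns delete nvmf_tgt_ns_spdk 2>/dev/null || true

    # Target side lives in its own namespace; each side is one end of a veth pair
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    # Initiator gets 10.0.0.1 in the root namespace, target gets 10.0.0.3 inside the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

    # Bridge the two bridge-side peers so initiator and target can reach each other
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up

    # Allow NVMe/TCP traffic to the listener port
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

Once the links are up, the script ping-checks 10.0.0.3 and 10.0.0.4 from the root namespace and 10.0.0.1 and 10.0.0.2 from inside nvmf_tgt_ns_spdk, as seen below, before declaring the fabric usable.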
00:21:02.228 11:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # true 00:21:02.228 11:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:02.228 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:02.228 11:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # true 00:21:02.228 11:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:02.228 11:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:02.228 11:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:02.228 11:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:02.228 11:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:02.228 11:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:02.228 11:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:02.228 11:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:02.228 11:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:02.228 11:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:02.228 11:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:02.228 11:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:02.228 11:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:02.228 11:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:02.228 11:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:02.228 11:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:02.228 11:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:02.228 11:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:02.228 11:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:02.228 11:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:02.228 11:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:02.228 11:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:02.228 11:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:21:02.228 11:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:02.228 11:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:02.228 11:04:49 nvmf_abort_qd_sizes -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:02.228 11:04:49 nvmf_abort_qd_sizes -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:02.228 11:04:49 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:02.228 11:04:49 nvmf_abort_qd_sizes -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:02.228 11:04:49 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:02.228 11:04:49 nvmf_abort_qd_sizes -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:02.228 11:04:49 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:02.228 11:04:49 nvmf_abort_qd_sizes -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:02.228 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:02.228 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:21:02.228 00:21:02.228 --- 10.0.0.3 ping statistics --- 00:21:02.228 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:02.228 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:21:02.228 11:04:49 nvmf_abort_qd_sizes -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:02.228 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:21:02.228 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.039 ms 00:21:02.228 00:21:02.228 --- 10.0.0.4 ping statistics --- 00:21:02.228 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:02.228 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:21:02.228 11:04:49 nvmf_abort_qd_sizes -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:02.228 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:02.228 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:21:02.228 00:21:02.228 --- 10.0.0.1 ping statistics --- 00:21:02.228 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:02.228 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:21:02.228 11:04:49 nvmf_abort_qd_sizes -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:02.487 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:02.487 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:21:02.487 00:21:02.487 --- 10.0.0.2 ping statistics --- 00:21:02.487 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:02.487 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:21:02.487 11:04:49 nvmf_abort_qd_sizes -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:02.487 11:04:49 nvmf_abort_qd_sizes -- nvmf/common.sh@461 -- # return 0 00:21:02.487 11:04:49 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:21:02.487 11:04:49 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:03.054 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:03.054 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:21:03.054 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:21:03.314 11:04:49 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:03.314 11:04:49 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:03.314 11:04:49 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:03.314 11:04:49 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:03.314 11:04:49 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:03.314 11:04:49 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:03.314 11:04:49 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:21:03.314 11:04:49 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:03.314 11:04:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:03.314 11:04:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:03.314 11:04:49 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=84052 00:21:03.314 11:04:49 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:21:03.314 11:04:49 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 84052 00:21:03.314 11:04:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 84052 ']' 00:21:03.314 11:04:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:03.314 11:04:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:03.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:03.314 11:04:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:03.314 11:04:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:03.314 11:04:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:03.314 [2024-11-15 11:04:50.043955] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
00:21:03.314 [2024-11-15 11:04:50.044056] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:03.573 [2024-11-15 11:04:50.198215] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:03.573 [2024-11-15 11:04:50.268709] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:03.573 [2024-11-15 11:04:50.269035] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:03.573 [2024-11-15 11:04:50.269244] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:03.573 [2024-11-15 11:04:50.269559] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:03.573 [2024-11-15 11:04:50.269698] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:03.573 [2024-11-15 11:04:50.271263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:03.573 [2024-11-15 11:04:50.271410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:03.573 [2024-11-15 11:04:50.272502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:03.573 [2024-11-15 11:04:50.272571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:03.573 [2024-11-15 11:04:50.345926] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:03.573 11:04:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:03.573 11:04:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:21:03.573 11:04:50 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:03.573 11:04:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:03.573 11:04:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:03.833 11:04:50 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:03.833 11:04:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:21:03.833 11:04:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:21:03.833 11:04:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:21:03.833 11:04:50 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:21:03.833 11:04:50 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:21:03.833 11:04:50 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n '' ]] 00:21:03.833 11:04:50 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:21:03.833 11:04:50 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:21:03.833 11:04:50 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # local bdf= 00:21:03.833 11:04:50 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:21:03.833 11:04:50 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # local class 00:21:03.833 11:04:50 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # local subclass 00:21:03.833 11:04:50 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # local progif 00:21:03.833 11:04:50 
nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # printf %02x 1 00:21:03.833 11:04:50 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # class=01 00:21:03.833 11:04:50 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # printf %02x 8 00:21:03.833 11:04:50 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # subclass=08 00:21:03.833 11:04:50 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # printf %02x 2 00:21:03.833 11:04:50 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # progif=02 00:21:03.833 11:04:50 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # hash lspci 00:21:03.833 11:04:50 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:21:03.833 11:04:50 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # lspci -mm -n -D 00:21:03.833 11:04:50 nvmf_abort_qd_sizes -- scripts/common.sh@243 -- # grep -i -- -p02 00:21:03.833 11:04:50 nvmf_abort_qd_sizes -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:21:03.833 11:04:50 nvmf_abort_qd_sizes -- scripts/common.sh@245 -- # tr -d '"' 00:21:03.833 11:04:50 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:21:03.833 11:04:50 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:21:03.833 11:04:50 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:21:03.833 11:04:50 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:21:03.833 11:04:50 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:21:03.833 11:04:50 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:21:03.833 11:04:50 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:21:03.833 11:04:50 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:21:03.833 11:04:50 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:21:03.833 11:04:50 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:21:03.833 11:04:50 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:21:03.833 11:04:50 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:21:03.833 11:04:50 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:21:03.833 11:04:50 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:21:03.833 11:04:50 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:21:03.833 11:04:50 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:21:03.833 11:04:50 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:21:03.833 11:04:50 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:21:03.833 11:04:50 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:21:03.833 11:04:50 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:21:03.833 11:04:50 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:21:03.833 11:04:50 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:21:03.833 11:04:50 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:21:03.833 11:04:50 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:21:03.833 11:04:50 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 2 )) 00:21:03.833 11:04:50 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:21:03.833 11:04:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 
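The nvme_in_userspace block above builds the list of NVMe controllers the abort test may claim: it formats the PCI class/subclass/prog-if triple 01/08/02 (mass storage / NVM Express / NVMe interface), matches it against lspci output, and then filters on the current driver binding, ending with 0000:00:10.0 and 0000:00:11.0. A standalone sketch of the same enumeration (the helper name below is invented, and the driver filter is paraphrased; the exact conditions live in iter_pci_class_code/nvme_in_userspace in scripts/common.sh):

    # NVMe controllers advertise PCI class 01, subclass 08, prog-if 02.
    # `lspci -mm -n -D` prints lines like:  0000:00:10.0 "0108" "1b36" "0010" -p02 ...
    nvme_bdfs() {
        lspci -mm -n -D | grep -- -p02 | awk '$2 == "\"0108\"" { print $1 }'
    }

    for bdf in $(nvme_bdfs); do
        # Skip controllers still owned by the kernel nvme driver; setup.sh has already
        # rebound the test disks to uio_pci_generic, so they pass through this check
        [[ -e /sys/bus/pci/drivers/nvme/$bdf ]] && continue
        echo "$bdf"
    done

The test then takes the first entry, 0000:00:10.0 in this run, as the backing controller for the SPDK target.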
00:21:03.833 11:04:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:21:03.833 11:04:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:21:03.833 11:04:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:03.833 11:04:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:03.833 11:04:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:03.833 ************************************ 00:21:03.833 START TEST spdk_target_abort 00:21:03.833 ************************************ 00:21:03.833 11:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:21:03.833 11:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:21:03.833 11:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:21:03.833 11:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.833 11:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:03.833 spdk_targetn1 00:21:03.833 11:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.833 11:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:03.833 11:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.833 11:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:03.833 [2024-11-15 11:04:50.581085] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:03.833 11:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.833 11:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:21:03.833 11:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.833 11:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:03.833 11:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.833 11:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:21:03.833 11:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.833 11:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:03.833 11:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.833 11:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420 00:21:03.833 11:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.833 11:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:03.833 [2024-11-15 11:04:50.618427] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:03.833 11:04:50 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.833 11:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.3 4420 nqn.2016-06.io.spdk:testnqn 00:21:03.833 11:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:21:03.833 11:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:21:03.833 11:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.3 00:21:03.833 11:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:21:03.833 11:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:21:03.833 11:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:21:03.833 11:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:21:03.833 11:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:21:03.833 11:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:03.833 11:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:21:03.833 11:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:03.834 11:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:21:03.834 11:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:03.834 11:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3' 00:21:03.834 11:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:03.834 11:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:21:03.834 11:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:03.834 11:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:03.834 11:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:03.834 11:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:07.125 Initializing NVMe Controllers 00:21:07.125 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:21:07.125 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:07.125 Initialization complete. Launching workers. 
00:21:07.125 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 9627, failed: 0 00:21:07.125 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1034, failed to submit 8593 00:21:07.125 success 708, unsuccessful 326, failed 0 00:21:07.125 11:04:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:07.125 11:04:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:10.470 Initializing NVMe Controllers 00:21:10.470 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:21:10.470 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:10.470 Initialization complete. Launching workers. 00:21:10.470 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 9015, failed: 0 00:21:10.470 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1149, failed to submit 7866 00:21:10.470 success 406, unsuccessful 743, failed 0 00:21:10.470 11:04:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:10.471 11:04:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:13.780 Initializing NVMe Controllers 00:21:13.780 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:21:13.780 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:13.780 Initialization complete. Launching workers. 
00:21:13.780 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 30511, failed: 0 00:21:13.780 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2356, failed to submit 28155 00:21:13.780 success 415, unsuccessful 1941, failed 0 00:21:13.780 11:05:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:21:13.780 11:05:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.780 11:05:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:13.780 11:05:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.780 11:05:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:21:13.780 11:05:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.780 11:05:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:14.348 11:05:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.348 11:05:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 84052 00:21:14.348 11:05:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 84052 ']' 00:21:14.348 11:05:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 84052 00:21:14.348 11:05:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:21:14.348 11:05:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:14.348 11:05:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84052 00:21:14.348 11:05:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:14.348 11:05:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:14.348 killing process with pid 84052 00:21:14.348 11:05:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84052' 00:21:14.348 11:05:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 84052 00:21:14.348 11:05:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 84052 00:21:14.607 00:21:14.607 real 0m10.818s 00:21:14.607 user 0m41.765s 00:21:14.607 sys 0m2.023s 00:21:14.607 11:05:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:14.607 ************************************ 00:21:14.607 11:05:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:14.607 END TEST spdk_target_abort 00:21:14.607 ************************************ 00:21:14.607 11:05:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:21:14.607 11:05:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:14.607 11:05:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:14.607 11:05:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:14.607 ************************************ 00:21:14.607 START TEST kernel_target_abort 00:21:14.607 
************************************ 00:21:14.607 11:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:21:14.607 11:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:21:14.607 11:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:21:14.607 11:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:14.607 11:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:14.607 11:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:14.607 11:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:14.607 11:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:14.607 11:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:14.607 11:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:14.607 11:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:14.607 11:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:14.607 11:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:21:14.607 11:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:21:14.607 11:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:21:14.607 11:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:14.607 11:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:14.607 11:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:21:14.607 11:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:21:14.607 11:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:21:14.607 11:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:21:14.607 11:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:21:14.607 11:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:15.175 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:15.175 Waiting for block devices as requested 00:21:15.175 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:15.175 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:15.175 11:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:21:15.175 11:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:21:15.175 11:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:21:15.175 11:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:21:15.175 11:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:21:15.175 11:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:21:15.175 11:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:21:15.175 11:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:21:15.175 11:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:21:15.175 No valid GPT data, bailing 00:21:15.434 11:05:02 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:21:15.434 11:05:02 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:21:15.434 11:05:02 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:21:15.434 11:05:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:21:15.434 11:05:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:21:15.434 11:05:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:21:15.434 11:05:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:21:15.434 11:05:02 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:21:15.434 11:05:02 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:21:15.434 11:05:02 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:21:15.434 11:05:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:21:15.434 11:05:02 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:21:15.434 11:05:02 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:21:15.434 No valid GPT data, bailing 00:21:15.434 11:05:02 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
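The scan above, continuing below, walks /sys/block/nvme* looking for a namespace the kernel target can safely export: zoned devices are skipped, and spdk-gpt.py plus `blkid -s PTTYPE` must both come back empty ("No valid GPT data, bailing") so a disk carrying a partition table is never touched. In this run /dev/nvme1n1 is the last candidate that passes, and the subsequent mkdir/echo calls wire it into the kernel nvmet target through configfs. A combined sketch of both steps, assuming the standard Linux nvmet configfs layout (the SPDK helper also writes a model/serial string and a few attributes not shown here):

    # 1) Find a block device with no partition table and no zoned constraint
    nvme=
    for blk in /sys/block/nvme*; do
        dev=/dev/${blk##*/}
        [[ $(<"$blk/queue/zoned") == none ]] || continue         # skip zoned namespaces
        [[ -z $(blkid -s PTTYPE -o value "$dev") ]] || continue  # skip partitioned disks
        nvme=$dev                                                # test script keeps the last match
    done

    # 2) Export it through the kernel nvmet target via configfs, as the log's mkdir/echo calls do
    subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    port=/sys/kernel/config/nvmet/ports/1
    modprobe nvmet
    mkdir -p "$subsys/namespaces/1" "$port"
    echo 1        > "$subsys/attr_allow_any_host"
    echo "$nvme"  > "$subsys/namespaces/1/device_path"
    echo 1        > "$subsys/namespaces/1/enable"
    echo 10.0.0.1 > "$port/addr_traddr"
    echo tcp      > "$port/addr_trtype"
    echo 4420     > "$port/addr_trsvcid"
    echo ipv4     > "$port/addr_adrfam"
    ln -s "$subsys" "$port/subsystems/"

With the port linked, `nvme discover -t tcp -a 10.0.0.1 -s 4420` returns the discovery log shown further down, listing the test subsystem as entry 1.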
00:21:15.434 11:05:02 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:21:15.434 11:05:02 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:21:15.434 11:05:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:21:15.434 11:05:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:21:15.434 11:05:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:21:15.434 11:05:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:21:15.434 11:05:02 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:21:15.434 11:05:02 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:21:15.434 11:05:02 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:21:15.434 11:05:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:21:15.434 11:05:02 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:21:15.434 11:05:02 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:21:15.434 No valid GPT data, bailing 00:21:15.434 11:05:02 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:21:15.434 11:05:02 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:21:15.434 11:05:02 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:21:15.434 11:05:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:21:15.434 11:05:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:21:15.434 11:05:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:21:15.434 11:05:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:21:15.434 11:05:02 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:21:15.434 11:05:02 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:21:15.434 11:05:02 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:21:15.434 11:05:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:21:15.434 11:05:02 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:21:15.434 11:05:02 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:21:15.434 No valid GPT data, bailing 00:21:15.434 11:05:02 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:21:15.434 11:05:02 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:21:15.434 11:05:02 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:21:15.434 11:05:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:21:15.434 11:05:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ 
-b /dev/nvme1n1 ]] 00:21:15.434 11:05:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:15.434 11:05:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:15.434 11:05:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:21:15.434 11:05:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:21:15.434 11:05:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:21:15.434 11:05:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:21:15.434 11:05:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:21:15.434 11:05:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:21:15.434 11:05:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:21:15.434 11:05:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:21:15.434 11:05:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:21:15.434 11:05:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:21:15.434 11:05:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca --hostid=02f14d39-9b07-4abc-bc4a-e88d43a336ca -a 10.0.0.1 -t tcp -s 4420 00:21:15.693 00:21:15.693 Discovery Log Number of Records 2, Generation counter 2 00:21:15.693 =====Discovery Log Entry 0====== 00:21:15.693 trtype: tcp 00:21:15.693 adrfam: ipv4 00:21:15.693 subtype: current discovery subsystem 00:21:15.693 treq: not specified, sq flow control disable supported 00:21:15.693 portid: 1 00:21:15.693 trsvcid: 4420 00:21:15.693 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:21:15.693 traddr: 10.0.0.1 00:21:15.693 eflags: none 00:21:15.693 sectype: none 00:21:15.693 =====Discovery Log Entry 1====== 00:21:15.693 trtype: tcp 00:21:15.693 adrfam: ipv4 00:21:15.693 subtype: nvme subsystem 00:21:15.693 treq: not specified, sq flow control disable supported 00:21:15.693 portid: 1 00:21:15.693 trsvcid: 4420 00:21:15.693 subnqn: nqn.2016-06.io.spdk:testnqn 00:21:15.693 traddr: 10.0.0.1 00:21:15.693 eflags: none 00:21:15.693 sectype: none 00:21:15.693 11:05:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:21:15.693 11:05:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:21:15.693 11:05:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:21:15.693 11:05:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:21:15.693 11:05:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:21:15.693 11:05:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:21:15.693 11:05:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:21:15.693 11:05:02 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:21:15.693 11:05:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:21:15.693 11:05:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:15.693 11:05:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:21:15.693 11:05:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:15.693 11:05:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:21:15.693 11:05:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:15.693 11:05:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:21:15.693 11:05:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:15.693 11:05:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:21:15.693 11:05:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:15.693 11:05:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:15.693 11:05:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:15.693 11:05:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:18.982 Initializing NVMe Controllers 00:21:18.982 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:21:18.982 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:18.982 Initialization complete. Launching workers. 00:21:18.982 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 33246, failed: 0 00:21:18.982 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 33246, failed to submit 0 00:21:18.982 success 0, unsuccessful 33246, failed 0 00:21:18.982 11:05:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:18.982 11:05:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:22.270 Initializing NVMe Controllers 00:21:22.270 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:21:22.270 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:22.270 Initialization complete. Launching workers. 
00:21:22.270 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 66286, failed: 0 00:21:22.270 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 28413, failed to submit 37873 00:21:22.270 success 0, unsuccessful 28413, failed 0 00:21:22.270 11:05:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:22.270 11:05:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:25.558 Initializing NVMe Controllers 00:21:25.558 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:21:25.558 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:25.558 Initialization complete. Launching workers. 00:21:25.558 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 75567, failed: 0 00:21:25.558 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 18834, failed to submit 56733 00:21:25.558 success 0, unsuccessful 18834, failed 0 00:21:25.558 11:05:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:21:25.558 11:05:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:21:25.558 11:05:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:21:25.558 11:05:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:25.558 11:05:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:25.558 11:05:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:21:25.558 11:05:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:25.558 11:05:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:21:25.558 11:05:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:21:25.558 11:05:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:25.817 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:27.723 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:21:27.723 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:21:27.723 00:21:27.723 real 0m12.962s 00:21:27.723 user 0m6.158s 00:21:27.723 sys 0m4.221s 00:21:27.723 11:05:14 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:27.723 ************************************ 00:21:27.723 END TEST kernel_target_abort 00:21:27.723 11:05:14 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:27.723 ************************************ 00:21:27.723 11:05:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:21:27.723 11:05:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:21:27.723 
11:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:27.723 11:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:21:27.723 11:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:27.723 11:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:21:27.723 11:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:27.723 11:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:27.723 rmmod nvme_tcp 00:21:27.723 rmmod nvme_fabrics 00:21:27.723 rmmod nvme_keyring 00:21:27.723 11:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:27.723 11:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:21:27.723 11:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:21:27.723 11:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 84052 ']' 00:21:27.723 11:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 84052 00:21:27.723 11:05:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 84052 ']' 00:21:27.723 11:05:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 84052 00:21:27.723 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (84052) - No such process 00:21:27.723 Process with pid 84052 is not found 00:21:27.723 11:05:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 84052 is not found' 00:21:27.723 11:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:21:27.723 11:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:27.982 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:28.241 Waiting for block devices as requested 00:21:28.241 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:28.241 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:28.241 11:05:15 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:28.241 11:05:15 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:28.241 11:05:15 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:21:28.241 11:05:15 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:21:28.241 11:05:15 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:28.241 11:05:15 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:21:28.241 11:05:15 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:28.241 11:05:15 nvmf_abort_qd_sizes -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:28.241 11:05:15 nvmf_abort_qd_sizes -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:28.241 11:05:15 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:28.500 11:05:15 nvmf_abort_qd_sizes -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:28.500 11:05:15 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:28.500 11:05:15 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:28.500 11:05:15 nvmf_abort_qd_sizes -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:28.500 11:05:15 nvmf_abort_qd_sizes -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:28.500 11:05:15 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:28.500 11:05:15 nvmf_abort_qd_sizes 
-- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:28.500 11:05:15 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:28.500 11:05:15 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:28.500 11:05:15 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:28.500 11:05:15 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:28.500 11:05:15 nvmf_abort_qd_sizes -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:28.500 11:05:15 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:28.500 11:05:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:28.500 11:05:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:28.500 11:05:15 nvmf_abort_qd_sizes -- nvmf/common.sh@300 -- # return 0 00:21:28.500 00:21:28.500 real 0m26.893s 00:21:28.500 user 0m49.069s 00:21:28.500 sys 0m7.722s 00:21:28.500 11:05:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:28.500 11:05:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:28.500 ************************************ 00:21:28.500 END TEST nvmf_abort_qd_sizes 00:21:28.500 ************************************ 00:21:28.760 11:05:15 -- spdk/autotest.sh@292 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:21:28.760 11:05:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:28.760 11:05:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:28.760 11:05:15 -- common/autotest_common.sh@10 -- # set +x 00:21:28.760 ************************************ 00:21:28.760 START TEST keyring_file 00:21:28.760 ************************************ 00:21:28.760 11:05:15 keyring_file -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:21:28.761 * Looking for test storage... 
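The nvmftestfini / nvmf_veth_fini sequence traced just above tears down the veth-plus-namespace topology the TCP tests ran on. Stripped of the helper indirection it is roughly the following (interface and namespace names taken from the trace; the body of _remove_spdk_ns is not captured, so the final namespace delete is an assumption):

# Sketch of the nvmf_veth_fini teardown traced above.
iptables-save | grep -v SPDK_NVMF | iptables-restore        # iptr: drop the SPDK_NVMF rules
for ifc in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$ifc" nomaster                             # detach each leg from the bridge
    ip link set "$ifc" down
done
ip link delete nvmf_br type bridge                          # remove the bridge
ip link delete nvmf_init_if                                 # initiator-side veth ends
ip link delete nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if   # target-side ends live in the netns
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
ip netns delete nvmf_tgt_ns_spdk                            # assumed: what _remove_spdk_ns does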
00:21:28.761 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:21:28.761 11:05:15 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:28.761 11:05:15 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 00:21:28.761 11:05:15 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:28.761 11:05:15 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:28.761 11:05:15 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:28.761 11:05:15 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:28.761 11:05:15 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:28.761 11:05:15 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:21:28.761 11:05:15 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:21:28.761 11:05:15 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:21:28.761 11:05:15 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:21:28.761 11:05:15 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:21:28.761 11:05:15 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:21:28.761 11:05:15 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:21:28.761 11:05:15 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:28.761 11:05:15 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:21:28.761 11:05:15 keyring_file -- scripts/common.sh@345 -- # : 1 00:21:28.761 11:05:15 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:28.761 11:05:15 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:28.761 11:05:15 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:21:28.761 11:05:15 keyring_file -- scripts/common.sh@353 -- # local d=1 00:21:28.761 11:05:15 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:28.761 11:05:15 keyring_file -- scripts/common.sh@355 -- # echo 1 00:21:28.761 11:05:15 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:21:28.761 11:05:15 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:21:28.761 11:05:15 keyring_file -- scripts/common.sh@353 -- # local d=2 00:21:28.761 11:05:15 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:28.761 11:05:15 keyring_file -- scripts/common.sh@355 -- # echo 2 00:21:28.761 11:05:15 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:21:28.761 11:05:15 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:28.761 11:05:15 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:28.761 11:05:15 keyring_file -- scripts/common.sh@368 -- # return 0 00:21:28.761 11:05:15 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:28.761 11:05:15 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:28.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:28.761 --rc genhtml_branch_coverage=1 00:21:28.761 --rc genhtml_function_coverage=1 00:21:28.761 --rc genhtml_legend=1 00:21:28.761 --rc geninfo_all_blocks=1 00:21:28.761 --rc geninfo_unexecuted_blocks=1 00:21:28.761 00:21:28.761 ' 00:21:28.761 11:05:15 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:28.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:28.761 --rc genhtml_branch_coverage=1 00:21:28.761 --rc genhtml_function_coverage=1 00:21:28.761 --rc genhtml_legend=1 00:21:28.761 --rc geninfo_all_blocks=1 00:21:28.761 --rc 
geninfo_unexecuted_blocks=1 00:21:28.761 00:21:28.761 ' 00:21:28.761 11:05:15 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:28.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:28.761 --rc genhtml_branch_coverage=1 00:21:28.761 --rc genhtml_function_coverage=1 00:21:28.761 --rc genhtml_legend=1 00:21:28.761 --rc geninfo_all_blocks=1 00:21:28.761 --rc geninfo_unexecuted_blocks=1 00:21:28.761 00:21:28.761 ' 00:21:28.761 11:05:15 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:28.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:28.761 --rc genhtml_branch_coverage=1 00:21:28.761 --rc genhtml_function_coverage=1 00:21:28.761 --rc genhtml_legend=1 00:21:28.761 --rc geninfo_all_blocks=1 00:21:28.761 --rc geninfo_unexecuted_blocks=1 00:21:28.761 00:21:28.761 ' 00:21:28.761 11:05:15 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:21:28.761 11:05:15 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:28.761 11:05:15 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:21:28.761 11:05:15 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:28.761 11:05:15 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:28.761 11:05:15 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:28.761 11:05:15 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:28.761 11:05:15 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:28.761 11:05:15 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:28.761 11:05:15 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:28.761 11:05:15 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:28.761 11:05:15 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:28.761 11:05:15 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:28.761 11:05:15 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:21:28.761 11:05:15 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:21:28.761 11:05:15 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:28.761 11:05:15 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:28.761 11:05:15 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:28.761 11:05:15 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:28.761 11:05:15 keyring_file -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:28.761 11:05:15 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:21:28.761 11:05:15 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:28.761 11:05:15 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:28.761 11:05:15 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:28.761 11:05:15 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:28.761 11:05:15 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:28.761 11:05:15 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:28.761 11:05:15 keyring_file -- paths/export.sh@5 -- # export PATH 00:21:28.761 11:05:15 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:28.761 11:05:15 keyring_file -- nvmf/common.sh@51 -- # : 0 00:21:28.761 11:05:15 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:28.761 11:05:15 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:28.761 11:05:15 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:28.761 11:05:15 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:28.761 11:05:15 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:28.761 11:05:15 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:28.761 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:28.761 11:05:15 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:28.761 11:05:15 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:28.761 11:05:15 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:28.761 11:05:15 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:21:28.761 11:05:15 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:21:28.762 11:05:15 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:21:28.762 11:05:15 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:21:28.762 11:05:15 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:21:28.762 11:05:15 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:21:28.762 11:05:15 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:21:28.762 11:05:15 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:21:28.762 11:05:15 
keyring_file -- keyring/common.sh@17 -- # name=key0 00:21:28.762 11:05:15 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:21:28.762 11:05:15 keyring_file -- keyring/common.sh@17 -- # digest=0 00:21:29.021 11:05:15 keyring_file -- keyring/common.sh@18 -- # mktemp 00:21:29.021 11:05:15 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.apgt4hYQJA 00:21:29.021 11:05:15 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:21:29.021 11:05:15 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:21:29.021 11:05:15 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:21:29.021 11:05:15 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:21:29.021 11:05:15 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:21:29.021 11:05:15 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:21:29.021 11:05:15 keyring_file -- nvmf/common.sh@733 -- # python - 00:21:29.021 11:05:15 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.apgt4hYQJA 00:21:29.021 11:05:15 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.apgt4hYQJA 00:21:29.022 11:05:15 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.apgt4hYQJA 00:21:29.022 11:05:15 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:21:29.022 11:05:15 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:21:29.022 11:05:15 keyring_file -- keyring/common.sh@17 -- # name=key1 00:21:29.022 11:05:15 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:21:29.022 11:05:15 keyring_file -- keyring/common.sh@17 -- # digest=0 00:21:29.022 11:05:15 keyring_file -- keyring/common.sh@18 -- # mktemp 00:21:29.022 11:05:15 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.w4wxvelCqp 00:21:29.022 11:05:15 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:21:29.022 11:05:15 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:21:29.022 11:05:15 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:21:29.022 11:05:15 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:21:29.022 11:05:15 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:21:29.022 11:05:15 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:21:29.022 11:05:15 keyring_file -- nvmf/common.sh@733 -- # python - 00:21:29.022 11:05:15 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.w4wxvelCqp 00:21:29.022 11:05:15 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.w4wxvelCqp 00:21:29.022 11:05:15 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.w4wxvelCqp 00:21:29.022 11:05:15 keyring_file -- keyring/file.sh@30 -- # tgtpid=84963 00:21:29.022 11:05:15 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:29.022 11:05:15 keyring_file -- keyring/file.sh@32 -- # waitforlisten 84963 00:21:29.022 11:05:15 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 84963 ']' 00:21:29.022 11:05:15 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:29.022 11:05:15 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:29.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
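The prep_key / format_interchange_psk helpers traced above turn each raw hex key into an NVMe/TCP TLS PSK interchange string and store it in a mode-0600 temp file (/tmp/tmp.apgt4hYQJA for key0 in this run). The python one-liner itself is not captured in the log, so the CRC32/base64 details below are an assumption about the usual interchange layout NVMeTLSkey-1:<digest>:<base64(key || crc32(key))>: rather than a copy of nvmf/common.sh:

# Rough sketch of prep_key key0 00112233445566778899aabbccddeeff 0, per the trace above.
key=00112233445566778899aabbccddeeff
path=$(mktemp)                          # /tmp/tmp.apgt4hYQJA in this run
python3 - "$key" > "$path" <<'EOF'
import base64, sys, zlib
raw = bytes.fromhex(sys.argv[1])                 # assumed: interchange PSK is key bytes plus little-endian
crc = zlib.crc32(raw).to_bytes(4, "little")      # CRC32, base64-encoded between the prefix and trailing ':'
print("NVMeTLSkey-1:00:" + base64.b64encode(raw + crc).decode() + ":", end="")
EOF
chmod 0600 "$path"                      # the 0660 variant later in this test is rejected by keyring_file_add_key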
00:21:29.022 11:05:15 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:29.022 11:05:15 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:29.022 11:05:15 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:29.022 [2024-11-15 11:05:15.818218] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:21:29.022 [2024-11-15 11:05:15.818328] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84963 ] 00:21:29.281 [2024-11-15 11:05:15.969804] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:29.281 [2024-11-15 11:05:16.047327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:29.540 [2024-11-15 11:05:16.145384] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:30.109 11:05:16 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:30.109 11:05:16 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:21:30.109 11:05:16 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:21:30.109 11:05:16 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.109 11:05:16 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:30.109 [2024-11-15 11:05:16.851484] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:30.109 null0 00:21:30.109 [2024-11-15 11:05:16.883427] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:30.109 [2024-11-15 11:05:16.883679] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:21:30.109 11:05:16 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.109 11:05:16 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:21:30.109 11:05:16 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:21:30.109 11:05:16 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:21:30.109 11:05:16 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:30.109 11:05:16 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:30.109 11:05:16 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:30.109 11:05:16 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:30.109 11:05:16 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:21:30.109 11:05:16 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.109 11:05:16 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:30.109 [2024-11-15 11:05:16.915405] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:21:30.109 request: 00:21:30.109 { 00:21:30.109 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:21:30.109 "secure_channel": false, 00:21:30.109 "listen_address": { 00:21:30.109 "trtype": "tcp", 00:21:30.109 "traddr": "127.0.0.1", 00:21:30.109 "trsvcid": "4420" 00:21:30.109 }, 00:21:30.109 "method": "nvmf_subsystem_add_listener", 
00:21:30.109 "req_id": 1 00:21:30.109 } 00:21:30.109 Got JSON-RPC error response 00:21:30.109 response: 00:21:30.109 { 00:21:30.109 "code": -32602, 00:21:30.109 "message": "Invalid parameters" 00:21:30.109 } 00:21:30.109 11:05:16 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:30.109 11:05:16 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:21:30.109 11:05:16 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:30.109 11:05:16 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:30.109 11:05:16 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:30.109 11:05:16 keyring_file -- keyring/file.sh@47 -- # bperfpid=84979 00:21:30.109 11:05:16 keyring_file -- keyring/file.sh@49 -- # waitforlisten 84979 /var/tmp/bperf.sock 00:21:30.109 11:05:16 keyring_file -- keyring/file.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:21:30.109 11:05:16 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 84979 ']' 00:21:30.109 11:05:16 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:30.109 11:05:16 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:30.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:30.109 11:05:16 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:30.109 11:05:16 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:30.109 11:05:16 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:30.369 [2024-11-15 11:05:16.984066] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
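The 'NOT rpc_cmd nvmf_subsystem_add_listener ...' step above is the standard autotest negative-path pattern: the listener on 127.0.0.1:4420 already exists, so the RPC is expected to fail with -32602, and the wrapper converts that failure into a pass. A simplified sketch of the pattern (the real NOT / valid_exec_arg helpers in autotest_common.sh do more bookkeeping than this):

# Minimal sketch of the NOT wrapper semantics seen in the trace.
NOT() {
    local es=0
    "$@" || es=$?        # run the wrapped command, capturing its exit status
    (( es != 0 ))        # succeed only if the command failed
}

# As used above: re-adding an existing listener must fail, and NOT turns that failure into a pass.
NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0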
00:21:30.369 [2024-11-15 11:05:16.984155] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84979 ] 00:21:30.369 [2024-11-15 11:05:17.131710] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:30.369 [2024-11-15 11:05:17.185827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:30.628 [2024-11-15 11:05:17.243414] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:30.628 11:05:17 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:30.628 11:05:17 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:21:30.628 11:05:17 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.apgt4hYQJA 00:21:30.628 11:05:17 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.apgt4hYQJA 00:21:30.888 11:05:17 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.w4wxvelCqp 00:21:30.888 11:05:17 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.w4wxvelCqp 00:21:31.147 11:05:17 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:21:31.147 11:05:17 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:21:31.147 11:05:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:31.147 11:05:17 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:31.147 11:05:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:31.406 11:05:18 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.apgt4hYQJA == \/\t\m\p\/\t\m\p\.\a\p\g\t\4\h\Y\Q\J\A ]] 00:21:31.406 11:05:18 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:21:31.406 11:05:18 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:21:31.406 11:05:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:31.406 11:05:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:31.406 11:05:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:31.665 11:05:18 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.w4wxvelCqp == \/\t\m\p\/\t\m\p\.\w\4\w\x\v\e\l\C\q\p ]] 00:21:31.665 11:05:18 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:21:31.665 11:05:18 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:31.665 11:05:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:31.665 11:05:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:31.665 11:05:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:31.665 11:05:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:31.924 11:05:18 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:21:31.924 11:05:18 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:21:31.924 11:05:18 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:31.924 11:05:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:31.924 11:05:18 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:31.924 11:05:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:31.924 11:05:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:32.491 11:05:19 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:21:32.491 11:05:19 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:32.491 11:05:19 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:32.491 [2024-11-15 11:05:19.320833] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:32.750 nvme0n1 00:21:32.750 11:05:19 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:21:32.750 11:05:19 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:32.750 11:05:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:32.750 11:05:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:32.750 11:05:19 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:32.750 11:05:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:33.009 11:05:19 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:21:33.009 11:05:19 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:21:33.009 11:05:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:33.009 11:05:19 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:33.009 11:05:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:33.009 11:05:19 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:33.009 11:05:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:33.268 11:05:19 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:21:33.268 11:05:19 keyring_file -- keyring/file.sh@63 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:33.268 Running I/O for 1 seconds... 
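From this point the test drives the bdevperf instance through /var/tmp/bperf.sock. With the bperf_cmd / get_refcnt helpers inlined, the happy path traced above reduces to a few rpc.py calls plus the bdevperf test trigger (paths and NQNs as in this run):

# Condensed happy path of keyring_file, as traced above (helper functions inlined).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bperf.sock
"$rpc" -s "$sock" keyring_file_add_key key0 /tmp/tmp.apgt4hYQJA        # register both PSK files
"$rpc" -s "$sock" keyring_file_add_key key1 /tmp/tmp.w4wxvelCqp
"$rpc" -s "$sock" keyring_get_keys | jq '.[] | select(.name == "key0") | .refcnt'   # get_refcnt -> 1
"$rpc" -s "$sock" bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
# key0's refcnt goes to 2 while nvme0 holds it; then the actual I/O run:
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests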
00:21:34.204 11124.00 IOPS, 43.45 MiB/s 00:21:34.204 Latency(us) 00:21:34.204 [2024-11-15T11:05:21.065Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:34.204 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:21:34.204 nvme0n1 : 1.01 11173.38 43.65 0.00 0.00 11422.71 5183.30 24427.05 00:21:34.204 [2024-11-15T11:05:21.065Z] =================================================================================================================== 00:21:34.204 [2024-11-15T11:05:21.065Z] Total : 11173.38 43.65 0.00 0.00 11422.71 5183.30 24427.05 00:21:34.204 { 00:21:34.204 "results": [ 00:21:34.204 { 00:21:34.204 "job": "nvme0n1", 00:21:34.204 "core_mask": "0x2", 00:21:34.204 "workload": "randrw", 00:21:34.204 "percentage": 50, 00:21:34.205 "status": "finished", 00:21:34.205 "queue_depth": 128, 00:21:34.205 "io_size": 4096, 00:21:34.205 "runtime": 1.007215, 00:21:34.205 "iops": 11173.384034193296, 00:21:34.205 "mibps": 43.64603138356756, 00:21:34.205 "io_failed": 0, 00:21:34.205 "io_timeout": 0, 00:21:34.205 "avg_latency_us": 11422.710818294909, 00:21:34.205 "min_latency_us": 5183.301818181818, 00:21:34.205 "max_latency_us": 24427.054545454546 00:21:34.205 } 00:21:34.205 ], 00:21:34.205 "core_count": 1 00:21:34.205 } 00:21:34.205 11:05:21 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:21:34.205 11:05:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:21:34.463 11:05:21 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:21:34.463 11:05:21 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:34.463 11:05:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:34.463 11:05:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:34.463 11:05:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:34.463 11:05:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:35.031 11:05:21 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:21:35.031 11:05:21 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:21:35.031 11:05:21 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:35.031 11:05:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:35.031 11:05:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:35.031 11:05:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:35.031 11:05:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:35.290 11:05:21 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:21:35.291 11:05:21 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:35.291 11:05:21 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:21:35.291 11:05:21 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:35.291 11:05:21 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:21:35.291 11:05:21 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:35.291 11:05:21 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:21:35.291 11:05:21 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:35.291 11:05:21 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:35.291 11:05:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:35.291 [2024-11-15 11:05:22.139150] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:35.291 [2024-11-15 11:05:22.140129] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8770 (107): Transport endpoint is not connected 00:21:35.291 [2024-11-15 11:05:22.141118] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8770 (9): Bad file descriptor 00:21:35.291 [2024-11-15 11:05:22.142115] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:21:35.291 [2024-11-15 11:05:22.142139] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:21:35.291 [2024-11-15 11:05:22.142149] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:21:35.291 [2024-11-15 11:05:22.142160] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:21:35.291 request: 00:21:35.291 { 00:21:35.291 "name": "nvme0", 00:21:35.291 "trtype": "tcp", 00:21:35.291 "traddr": "127.0.0.1", 00:21:35.291 "adrfam": "ipv4", 00:21:35.291 "trsvcid": "4420", 00:21:35.291 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:35.291 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:35.291 "prchk_reftag": false, 00:21:35.291 "prchk_guard": false, 00:21:35.291 "hdgst": false, 00:21:35.291 "ddgst": false, 00:21:35.291 "psk": "key1", 00:21:35.291 "allow_unrecognized_csi": false, 00:21:35.291 "method": "bdev_nvme_attach_controller", 00:21:35.291 "req_id": 1 00:21:35.291 } 00:21:35.291 Got JSON-RPC error response 00:21:35.291 response: 00:21:35.291 { 00:21:35.291 "code": -5, 00:21:35.291 "message": "Input/output error" 00:21:35.291 } 00:21:35.549 11:05:22 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:21:35.549 11:05:22 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:35.549 11:05:22 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:35.549 11:05:22 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:35.549 11:05:22 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:21:35.549 11:05:22 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:35.549 11:05:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:35.549 11:05:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:35.549 11:05:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:35.549 11:05:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:35.808 11:05:22 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:21:35.808 11:05:22 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:21:35.808 11:05:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:35.808 11:05:22 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:35.808 11:05:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:35.808 11:05:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:35.808 11:05:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:36.068 11:05:22 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:21:36.068 11:05:22 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:21:36.068 11:05:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:21:36.327 11:05:23 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:21:36.327 11:05:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:21:36.586 11:05:23 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:21:36.586 11:05:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:36.586 11:05:23 keyring_file -- keyring/file.sh@78 -- # jq length 00:21:36.845 11:05:23 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:21:36.845 11:05:23 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.apgt4hYQJA 00:21:36.845 11:05:23 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.apgt4hYQJA 00:21:36.845 11:05:23 keyring_file -- 
common/autotest_common.sh@652 -- # local es=0 00:21:36.845 11:05:23 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.apgt4hYQJA 00:21:36.845 11:05:23 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:21:36.845 11:05:23 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:36.845 11:05:23 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:21:36.845 11:05:23 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:36.845 11:05:23 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.apgt4hYQJA 00:21:36.845 11:05:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.apgt4hYQJA 00:21:37.103 [2024-11-15 11:05:23.865756] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.apgt4hYQJA': 0100660 00:21:37.103 [2024-11-15 11:05:23.865846] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:21:37.103 request: 00:21:37.103 { 00:21:37.103 "name": "key0", 00:21:37.103 "path": "/tmp/tmp.apgt4hYQJA", 00:21:37.103 "method": "keyring_file_add_key", 00:21:37.103 "req_id": 1 00:21:37.103 } 00:21:37.103 Got JSON-RPC error response 00:21:37.103 response: 00:21:37.103 { 00:21:37.103 "code": -1, 00:21:37.103 "message": "Operation not permitted" 00:21:37.103 } 00:21:37.103 11:05:23 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:21:37.103 11:05:23 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:37.103 11:05:23 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:37.103 11:05:23 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:37.103 11:05:23 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.apgt4hYQJA 00:21:37.103 11:05:23 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.apgt4hYQJA 00:21:37.103 11:05:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.apgt4hYQJA 00:21:37.362 11:05:24 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.apgt4hYQJA 00:21:37.362 11:05:24 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:21:37.362 11:05:24 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:37.362 11:05:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:37.362 11:05:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:37.362 11:05:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:37.362 11:05:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:37.623 11:05:24 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:21:37.623 11:05:24 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:37.882 11:05:24 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:21:37.882 11:05:24 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:37.882 11:05:24 
keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:21:37.882 11:05:24 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:37.882 11:05:24 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:21:37.882 11:05:24 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:37.882 11:05:24 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:37.882 11:05:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:38.141 [2024-11-15 11:05:24.762007] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.apgt4hYQJA': No such file or directory 00:21:38.142 [2024-11-15 11:05:24.762076] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:21:38.142 [2024-11-15 11:05:24.762099] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:21:38.142 [2024-11-15 11:05:24.762110] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:21:38.142 [2024-11-15 11:05:24.762123] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:21:38.142 [2024-11-15 11:05:24.762133] bdev_nvme.c:6669:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:21:38.142 request: 00:21:38.142 { 00:21:38.142 "name": "nvme0", 00:21:38.142 "trtype": "tcp", 00:21:38.142 "traddr": "127.0.0.1", 00:21:38.142 "adrfam": "ipv4", 00:21:38.142 "trsvcid": "4420", 00:21:38.142 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:38.142 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:38.142 "prchk_reftag": false, 00:21:38.142 "prchk_guard": false, 00:21:38.142 "hdgst": false, 00:21:38.142 "ddgst": false, 00:21:38.142 "psk": "key0", 00:21:38.142 "allow_unrecognized_csi": false, 00:21:38.142 "method": "bdev_nvme_attach_controller", 00:21:38.142 "req_id": 1 00:21:38.142 } 00:21:38.142 Got JSON-RPC error response 00:21:38.142 response: 00:21:38.142 { 00:21:38.142 "code": -19, 00:21:38.142 "message": "No such device" 00:21:38.142 } 00:21:38.142 11:05:24 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:21:38.142 11:05:24 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:38.142 11:05:24 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:38.142 11:05:24 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:38.142 11:05:24 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:21:38.142 11:05:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:21:38.410 11:05:25 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:21:38.410 11:05:25 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:21:38.410 11:05:25 keyring_file -- keyring/common.sh@17 -- # name=key0 00:21:38.410 11:05:25 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:21:38.410 
11:05:25 keyring_file -- keyring/common.sh@17 -- # digest=0 00:21:38.410 11:05:25 keyring_file -- keyring/common.sh@18 -- # mktemp 00:21:38.410 11:05:25 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.nIwAjtuFKa 00:21:38.410 11:05:25 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:21:38.410 11:05:25 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:21:38.410 11:05:25 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:21:38.410 11:05:25 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:21:38.410 11:05:25 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:21:38.410 11:05:25 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:21:38.410 11:05:25 keyring_file -- nvmf/common.sh@733 -- # python - 00:21:38.410 11:05:25 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.nIwAjtuFKa 00:21:38.410 11:05:25 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.nIwAjtuFKa 00:21:38.410 11:05:25 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.nIwAjtuFKa 00:21:38.410 11:05:25 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.nIwAjtuFKa 00:21:38.410 11:05:25 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.nIwAjtuFKa 00:21:38.668 11:05:25 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:38.668 11:05:25 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:38.926 nvme0n1 00:21:38.926 11:05:25 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:21:38.926 11:05:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:38.926 11:05:25 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:38.926 11:05:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:38.926 11:05:25 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:38.926 11:05:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:39.184 11:05:25 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:21:39.184 11:05:25 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:21:39.184 11:05:25 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:21:39.442 11:05:26 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:21:39.442 11:05:26 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:21:39.442 11:05:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:39.442 11:05:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:39.442 11:05:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:39.701 11:05:26 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:21:39.701 11:05:26 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:21:39.701 11:05:26 keyring_file -- 
keyring/common.sh@12 -- # get_key key0 00:21:39.701 11:05:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:39.701 11:05:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:39.701 11:05:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:39.701 11:05:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:39.960 11:05:26 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:21:39.960 11:05:26 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:21:39.960 11:05:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:21:40.528 11:05:27 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:21:40.528 11:05:27 keyring_file -- keyring/file.sh@105 -- # jq length 00:21:40.528 11:05:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:40.528 11:05:27 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:21:40.528 11:05:27 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.nIwAjtuFKa 00:21:40.528 11:05:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.nIwAjtuFKa 00:21:40.786 11:05:27 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.w4wxvelCqp 00:21:40.786 11:05:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.w4wxvelCqp 00:21:41.045 11:05:27 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:41.045 11:05:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:41.615 nvme0n1 00:21:41.615 11:05:28 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:21:41.615 11:05:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:21:41.875 11:05:28 keyring_file -- keyring/file.sh@113 -- # config='{ 00:21:41.875 "subsystems": [ 00:21:41.875 { 00:21:41.875 "subsystem": "keyring", 00:21:41.875 "config": [ 00:21:41.875 { 00:21:41.875 "method": "keyring_file_add_key", 00:21:41.875 "params": { 00:21:41.875 "name": "key0", 00:21:41.875 "path": "/tmp/tmp.nIwAjtuFKa" 00:21:41.875 } 00:21:41.875 }, 00:21:41.875 { 00:21:41.875 "method": "keyring_file_add_key", 00:21:41.875 "params": { 00:21:41.875 "name": "key1", 00:21:41.875 "path": "/tmp/tmp.w4wxvelCqp" 00:21:41.875 } 00:21:41.875 } 00:21:41.875 ] 00:21:41.875 }, 00:21:41.875 { 00:21:41.875 "subsystem": "iobuf", 00:21:41.875 "config": [ 00:21:41.875 { 00:21:41.875 "method": "iobuf_set_options", 00:21:41.875 "params": { 00:21:41.875 "small_pool_count": 8192, 00:21:41.875 "large_pool_count": 1024, 00:21:41.875 "small_bufsize": 8192, 00:21:41.875 "large_bufsize": 135168, 00:21:41.875 "enable_numa": false 00:21:41.875 } 00:21:41.875 } 00:21:41.875 ] 00:21:41.875 }, 00:21:41.875 { 00:21:41.875 "subsystem": 
"sock", 00:21:41.875 "config": [ 00:21:41.875 { 00:21:41.875 "method": "sock_set_default_impl", 00:21:41.875 "params": { 00:21:41.875 "impl_name": "uring" 00:21:41.875 } 00:21:41.875 }, 00:21:41.875 { 00:21:41.875 "method": "sock_impl_set_options", 00:21:41.875 "params": { 00:21:41.875 "impl_name": "ssl", 00:21:41.875 "recv_buf_size": 4096, 00:21:41.875 "send_buf_size": 4096, 00:21:41.875 "enable_recv_pipe": true, 00:21:41.875 "enable_quickack": false, 00:21:41.875 "enable_placement_id": 0, 00:21:41.875 "enable_zerocopy_send_server": true, 00:21:41.875 "enable_zerocopy_send_client": false, 00:21:41.875 "zerocopy_threshold": 0, 00:21:41.875 "tls_version": 0, 00:21:41.875 "enable_ktls": false 00:21:41.875 } 00:21:41.875 }, 00:21:41.875 { 00:21:41.875 "method": "sock_impl_set_options", 00:21:41.875 "params": { 00:21:41.875 "impl_name": "posix", 00:21:41.875 "recv_buf_size": 2097152, 00:21:41.875 "send_buf_size": 2097152, 00:21:41.875 "enable_recv_pipe": true, 00:21:41.875 "enable_quickack": false, 00:21:41.875 "enable_placement_id": 0, 00:21:41.875 "enable_zerocopy_send_server": true, 00:21:41.875 "enable_zerocopy_send_client": false, 00:21:41.875 "zerocopy_threshold": 0, 00:21:41.875 "tls_version": 0, 00:21:41.875 "enable_ktls": false 00:21:41.875 } 00:21:41.875 }, 00:21:41.875 { 00:21:41.875 "method": "sock_impl_set_options", 00:21:41.875 "params": { 00:21:41.875 "impl_name": "uring", 00:21:41.875 "recv_buf_size": 2097152, 00:21:41.875 "send_buf_size": 2097152, 00:21:41.875 "enable_recv_pipe": true, 00:21:41.875 "enable_quickack": false, 00:21:41.875 "enable_placement_id": 0, 00:21:41.875 "enable_zerocopy_send_server": false, 00:21:41.875 "enable_zerocopy_send_client": false, 00:21:41.875 "zerocopy_threshold": 0, 00:21:41.875 "tls_version": 0, 00:21:41.875 "enable_ktls": false 00:21:41.875 } 00:21:41.875 } 00:21:41.875 ] 00:21:41.875 }, 00:21:41.875 { 00:21:41.875 "subsystem": "vmd", 00:21:41.875 "config": [] 00:21:41.875 }, 00:21:41.875 { 00:21:41.875 "subsystem": "accel", 00:21:41.875 "config": [ 00:21:41.875 { 00:21:41.875 "method": "accel_set_options", 00:21:41.875 "params": { 00:21:41.875 "small_cache_size": 128, 00:21:41.875 "large_cache_size": 16, 00:21:41.875 "task_count": 2048, 00:21:41.875 "sequence_count": 2048, 00:21:41.875 "buf_count": 2048 00:21:41.875 } 00:21:41.875 } 00:21:41.875 ] 00:21:41.875 }, 00:21:41.875 { 00:21:41.875 "subsystem": "bdev", 00:21:41.875 "config": [ 00:21:41.875 { 00:21:41.875 "method": "bdev_set_options", 00:21:41.875 "params": { 00:21:41.875 "bdev_io_pool_size": 65535, 00:21:41.875 "bdev_io_cache_size": 256, 00:21:41.875 "bdev_auto_examine": true, 00:21:41.875 "iobuf_small_cache_size": 128, 00:21:41.875 "iobuf_large_cache_size": 16 00:21:41.875 } 00:21:41.875 }, 00:21:41.875 { 00:21:41.875 "method": "bdev_raid_set_options", 00:21:41.875 "params": { 00:21:41.875 "process_window_size_kb": 1024, 00:21:41.875 "process_max_bandwidth_mb_sec": 0 00:21:41.875 } 00:21:41.875 }, 00:21:41.875 { 00:21:41.875 "method": "bdev_iscsi_set_options", 00:21:41.875 "params": { 00:21:41.875 "timeout_sec": 30 00:21:41.875 } 00:21:41.875 }, 00:21:41.875 { 00:21:41.875 "method": "bdev_nvme_set_options", 00:21:41.875 "params": { 00:21:41.875 "action_on_timeout": "none", 00:21:41.875 "timeout_us": 0, 00:21:41.875 "timeout_admin_us": 0, 00:21:41.875 "keep_alive_timeout_ms": 10000, 00:21:41.875 "arbitration_burst": 0, 00:21:41.875 "low_priority_weight": 0, 00:21:41.875 "medium_priority_weight": 0, 00:21:41.875 "high_priority_weight": 0, 00:21:41.875 "nvme_adminq_poll_period_us": 
10000, 00:21:41.875 "nvme_ioq_poll_period_us": 0, 00:21:41.875 "io_queue_requests": 512, 00:21:41.875 "delay_cmd_submit": true, 00:21:41.875 "transport_retry_count": 4, 00:21:41.875 "bdev_retry_count": 3, 00:21:41.875 "transport_ack_timeout": 0, 00:21:41.875 "ctrlr_loss_timeout_sec": 0, 00:21:41.875 "reconnect_delay_sec": 0, 00:21:41.875 "fast_io_fail_timeout_sec": 0, 00:21:41.875 "disable_auto_failback": false, 00:21:41.875 "generate_uuids": false, 00:21:41.875 "transport_tos": 0, 00:21:41.875 "nvme_error_stat": false, 00:21:41.875 "rdma_srq_size": 0, 00:21:41.875 "io_path_stat": false, 00:21:41.875 "allow_accel_sequence": false, 00:21:41.875 "rdma_max_cq_size": 0, 00:21:41.875 "rdma_cm_event_timeout_ms": 0, 00:21:41.875 "dhchap_digests": [ 00:21:41.875 "sha256", 00:21:41.875 "sha384", 00:21:41.875 "sha512" 00:21:41.875 ], 00:21:41.875 "dhchap_dhgroups": [ 00:21:41.875 "null", 00:21:41.875 "ffdhe2048", 00:21:41.875 "ffdhe3072", 00:21:41.875 "ffdhe4096", 00:21:41.875 "ffdhe6144", 00:21:41.875 "ffdhe8192" 00:21:41.875 ] 00:21:41.875 } 00:21:41.875 }, 00:21:41.875 { 00:21:41.875 "method": "bdev_nvme_attach_controller", 00:21:41.875 "params": { 00:21:41.875 "name": "nvme0", 00:21:41.875 "trtype": "TCP", 00:21:41.875 "adrfam": "IPv4", 00:21:41.875 "traddr": "127.0.0.1", 00:21:41.875 "trsvcid": "4420", 00:21:41.875 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:41.875 "prchk_reftag": false, 00:21:41.875 "prchk_guard": false, 00:21:41.875 "ctrlr_loss_timeout_sec": 0, 00:21:41.875 "reconnect_delay_sec": 0, 00:21:41.875 "fast_io_fail_timeout_sec": 0, 00:21:41.875 "psk": "key0", 00:21:41.875 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:41.875 "hdgst": false, 00:21:41.875 "ddgst": false, 00:21:41.875 "multipath": "multipath" 00:21:41.875 } 00:21:41.875 }, 00:21:41.875 { 00:21:41.875 "method": "bdev_nvme_set_hotplug", 00:21:41.875 "params": { 00:21:41.875 "period_us": 100000, 00:21:41.875 "enable": false 00:21:41.875 } 00:21:41.875 }, 00:21:41.875 { 00:21:41.875 "method": "bdev_wait_for_examine" 00:21:41.875 } 00:21:41.875 ] 00:21:41.875 }, 00:21:41.875 { 00:21:41.875 "subsystem": "nbd", 00:21:41.875 "config": [] 00:21:41.875 } 00:21:41.875 ] 00:21:41.875 }' 00:21:41.875 11:05:28 keyring_file -- keyring/file.sh@115 -- # killprocess 84979 00:21:41.875 11:05:28 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 84979 ']' 00:21:41.875 11:05:28 keyring_file -- common/autotest_common.sh@958 -- # kill -0 84979 00:21:41.875 11:05:28 keyring_file -- common/autotest_common.sh@959 -- # uname 00:21:41.876 11:05:28 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:41.876 11:05:28 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84979 00:21:41.876 11:05:28 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:41.876 11:05:28 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:41.876 killing process with pid 84979 00:21:41.876 11:05:28 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84979' 00:21:41.876 Received shutdown signal, test time was about 1.000000 seconds 00:21:41.876 00:21:41.876 Latency(us) 00:21:41.876 [2024-11-15T11:05:28.737Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:41.876 [2024-11-15T11:05:28.737Z] =================================================================================================================== 00:21:41.876 [2024-11-15T11:05:28.737Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:41.876 
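For reference, the file-based keyring flow that keyring/file.sh is driving through bdevperf's JSON-RPC socket condenses to a handful of commands. The sketch below reuses the key names, PSK file paths, and socket address from this run, and assumes the bdevperf instance is still listening on /var/tmp/bperf.sock:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Register the two PSK files with the running bdevperf target.
  $RPC -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.nIwAjtuFKa
  $RPC -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.w4wxvelCqp
  # Attach an NVMe/TCP controller whose TLS session is authenticated with key0.
  $RPC -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
  # Inspect the registered keys and their reference counts.
  $RPC -s /var/tmp/bperf.sock keyring_get_keys | jq '.[] | select(.name == "key0")'
  # save_config (dumped above) reports the keyring and bdev subsystems the target would replay.
  $RPC -s /var/tmp/bperf.sock save_config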
11:05:28 keyring_file -- common/autotest_common.sh@973 -- # kill 84979 00:21:41.876 11:05:28 keyring_file -- common/autotest_common.sh@978 -- # wait 84979 00:21:42.135 11:05:28 keyring_file -- keyring/file.sh@118 -- # bperfpid=85231 00:21:42.135 11:05:28 keyring_file -- keyring/file.sh@120 -- # waitforlisten 85231 /var/tmp/bperf.sock 00:21:42.135 11:05:28 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 85231 ']' 00:21:42.135 11:05:28 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:42.135 11:05:28 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:42.135 11:05:28 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:42.135 11:05:28 keyring_file -- keyring/file.sh@116 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:21:42.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:42.135 11:05:28 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:21:42.135 "subsystems": [ 00:21:42.135 { 00:21:42.135 "subsystem": "keyring", 00:21:42.135 "config": [ 00:21:42.135 { 00:21:42.135 "method": "keyring_file_add_key", 00:21:42.135 "params": { 00:21:42.135 "name": "key0", 00:21:42.135 "path": "/tmp/tmp.nIwAjtuFKa" 00:21:42.135 } 00:21:42.135 }, 00:21:42.135 { 00:21:42.135 "method": "keyring_file_add_key", 00:21:42.135 "params": { 00:21:42.135 "name": "key1", 00:21:42.135 "path": "/tmp/tmp.w4wxvelCqp" 00:21:42.135 } 00:21:42.135 } 00:21:42.135 ] 00:21:42.135 }, 00:21:42.135 { 00:21:42.135 "subsystem": "iobuf", 00:21:42.135 "config": [ 00:21:42.135 { 00:21:42.135 "method": "iobuf_set_options", 00:21:42.135 "params": { 00:21:42.135 "small_pool_count": 8192, 00:21:42.135 "large_pool_count": 1024, 00:21:42.135 "small_bufsize": 8192, 00:21:42.135 "large_bufsize": 135168, 00:21:42.135 "enable_numa": false 00:21:42.135 } 00:21:42.135 } 00:21:42.135 ] 00:21:42.135 }, 00:21:42.135 { 00:21:42.135 "subsystem": "sock", 00:21:42.135 "config": [ 00:21:42.135 { 00:21:42.135 "method": "sock_set_default_impl", 00:21:42.135 "params": { 00:21:42.135 "impl_name": "uring" 00:21:42.135 } 00:21:42.135 }, 00:21:42.135 { 00:21:42.135 "method": "sock_impl_set_options", 00:21:42.135 "params": { 00:21:42.135 "impl_name": "ssl", 00:21:42.135 "recv_buf_size": 4096, 00:21:42.135 "send_buf_size": 4096, 00:21:42.135 "enable_recv_pipe": true, 00:21:42.135 "enable_quickack": false, 00:21:42.135 "enable_placement_id": 0, 00:21:42.135 "enable_zerocopy_send_server": true, 00:21:42.135 "enable_zerocopy_send_client": false, 00:21:42.135 "zerocopy_threshold": 0, 00:21:42.135 "tls_version": 0, 00:21:42.135 "enable_ktls": false 00:21:42.135 } 00:21:42.135 }, 00:21:42.135 { 00:21:42.135 "method": "sock_impl_set_options", 00:21:42.135 "params": { 00:21:42.135 "impl_name": "posix", 00:21:42.135 "recv_buf_size": 2097152, 00:21:42.135 "send_buf_size": 2097152, 00:21:42.135 "enable_recv_pipe": true, 00:21:42.135 "enable_quickack": false, 00:21:42.135 "enable_placement_id": 0, 00:21:42.135 "enable_zerocopy_send_server": true, 00:21:42.135 "enable_zerocopy_send_client": false, 00:21:42.135 "zerocopy_threshold": 0, 00:21:42.135 "tls_version": 0, 00:21:42.135 "enable_ktls": false 00:21:42.135 } 00:21:42.135 }, 00:21:42.135 { 00:21:42.135 "method": "sock_impl_set_options", 00:21:42.135 "params": { 00:21:42.135 "impl_name": "uring", 00:21:42.135 
"recv_buf_size": 2097152, 00:21:42.135 "send_buf_size": 2097152, 00:21:42.135 "enable_recv_pipe": true, 00:21:42.135 "enable_quickack": false, 00:21:42.135 "enable_placement_id": 0, 00:21:42.135 "enable_zerocopy_send_server": false, 00:21:42.135 "enable_zerocopy_send_client": false, 00:21:42.135 "zerocopy_threshold": 0, 00:21:42.135 "tls_version": 0, 00:21:42.135 "enable_ktls": false 00:21:42.135 } 00:21:42.135 } 00:21:42.135 ] 00:21:42.135 }, 00:21:42.135 { 00:21:42.135 "subsystem": "vmd", 00:21:42.135 "config": [] 00:21:42.135 }, 00:21:42.135 { 00:21:42.135 "subsystem": "accel", 00:21:42.135 "config": [ 00:21:42.135 { 00:21:42.135 "method": "accel_set_options", 00:21:42.135 "params": { 00:21:42.135 "small_cache_size": 128, 00:21:42.135 "large_cache_size": 16, 00:21:42.135 "task_count": 2048, 00:21:42.135 "sequence_count": 2048, 00:21:42.135 "buf_count": 2048 00:21:42.135 } 00:21:42.135 } 00:21:42.135 ] 00:21:42.135 }, 00:21:42.135 { 00:21:42.135 "subsystem": "bdev", 00:21:42.135 "config": [ 00:21:42.135 { 00:21:42.135 "method": "bdev_set_options", 00:21:42.135 "params": { 00:21:42.135 "bdev_io_pool_size": 65535, 00:21:42.135 "bdev_io_cache_size": 256, 00:21:42.135 "bdev_auto_examine": true, 00:21:42.135 "iobuf_small_cache_size": 128, 00:21:42.135 "iobuf_large_cache_size": 16 00:21:42.135 } 00:21:42.135 }, 00:21:42.135 { 00:21:42.135 "method": "bdev_raid_set_options", 00:21:42.135 "params": { 00:21:42.135 "process_window_size_kb": 1024, 00:21:42.135 "process_max_bandwidth_mb_sec": 0 00:21:42.136 } 00:21:42.136 }, 00:21:42.136 { 00:21:42.136 "method": "bdev_iscsi_set_options", 00:21:42.136 "params": { 00:21:42.136 "timeout_sec": 30 00:21:42.136 } 00:21:42.136 }, 00:21:42.136 { 00:21:42.136 "method": "bdev_nvme_set_options", 00:21:42.136 "params": { 00:21:42.136 "action_on_timeout": "none", 00:21:42.136 "timeout_us": 0, 00:21:42.136 "timeout_admin_us": 0, 00:21:42.136 "keep_alive_timeout_ms": 10000, 00:21:42.136 "arbitration_burst": 0, 00:21:42.136 "low_priority_weight": 0, 00:21:42.136 "medium_priority_weight": 0, 00:21:42.136 "high_priority_weight": 0, 00:21:42.136 "nvme_adminq_poll_period_us": 10000, 00:21:42.136 "nvme_ioq_poll_period_us": 0, 00:21:42.136 "io_queue_requests": 512, 00:21:42.136 "delay_cmd_submit": true, 00:21:42.136 "transport_retry_count": 4, 00:21:42.136 "bdev_retry_count": 3, 00:21:42.136 "transport_ack_timeout": 0, 00:21:42.136 "ctrlr_loss_timeout_sec": 0, 00:21:42.136 "reconnect_delay_sec": 0, 00:21:42.136 "fast_io_fail_timeout_sec": 0, 00:21:42.136 "disable_auto_failback": false, 00:21:42.136 "generate_uuids": false, 00:21:42.136 "transport_tos": 0, 00:21:42.136 "nvme_error_stat": false, 00:21:42.136 "rdma_srq_size": 0, 00:21:42.136 "io_path_stat": false, 00:21:42.136 "allow_accel_sequence": false, 00:21:42.136 "rdma_max_cq_size": 0, 00:21:42.136 "rdma_cm_event_timeout_ms": 0, 00:21:42.136 "dhchap_digests": [ 00:21:42.136 "sha256", 00:21:42.136 "sha384", 00:21:42.136 "sha512" 00:21:42.136 ], 00:21:42.136 "dhchap_dhgroups": [ 00:21:42.136 "null", 00:21:42.136 "ffdhe2048", 00:21:42.136 "ffdhe3072", 00:21:42.136 "ffdhe4096", 00:21:42.136 "ffdhe6144", 00:21:42.136 "ffdhe8192" 00:21:42.136 ] 00:21:42.136 } 00:21:42.136 }, 00:21:42.136 { 00:21:42.136 "method": "bdev_nvme_attach_controller", 00:21:42.136 "params": { 00:21:42.136 "name": "nvme0", 00:21:42.136 "trtype": "TCP", 00:21:42.136 "adrfam": "IPv4", 00:21:42.136 "traddr": "127.0.0.1", 00:21:42.136 "trsvcid": "4420", 00:21:42.136 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:42.136 "prchk_reftag": false, 00:21:42.136 
"prchk_guard": false, 00:21:42.136 "ctrlr_loss_timeout_sec": 0, 00:21:42.136 "reconnect_delay_sec": 0, 00:21:42.136 "fast_io_fail_timeout_sec": 0, 00:21:42.136 "psk": "key0", 00:21:42.136 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:42.136 "hdgst": false, 00:21:42.136 "ddgst": false, 00:21:42.136 "multipath": "multipath" 00:21:42.136 } 00:21:42.136 }, 00:21:42.136 { 00:21:42.136 "method": "bdev_nvme_set_hotplug", 00:21:42.136 "params": { 00:21:42.136 "period_us": 100000, 00:21:42.136 "enable": false 00:21:42.136 } 00:21:42.136 }, 00:21:42.136 { 00:21:42.136 "method": "bdev_wait_for_examine" 00:21:42.136 } 00:21:42.136 ] 00:21:42.136 }, 00:21:42.136 { 00:21:42.136 "subsystem": "nbd", 00:21:42.136 "config": [] 00:21:42.136 } 00:21:42.136 ] 00:21:42.136 }' 00:21:42.136 11:05:28 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:42.136 11:05:28 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:42.136 [2024-11-15 11:05:28.888493] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:21:42.136 [2024-11-15 11:05:28.888583] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85231 ] 00:21:42.395 [2024-11-15 11:05:29.030523] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:42.395 [2024-11-15 11:05:29.084269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:42.395 [2024-11-15 11:05:29.237511] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:42.653 [2024-11-15 11:05:29.303941] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:43.219 11:05:29 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:43.219 11:05:29 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:21:43.219 11:05:29 keyring_file -- keyring/file.sh@121 -- # jq length 00:21:43.219 11:05:29 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:21:43.219 11:05:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:43.477 11:05:30 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:21:43.477 11:05:30 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:21:43.477 11:05:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:43.477 11:05:30 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:43.477 11:05:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:43.477 11:05:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:43.477 11:05:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:43.735 11:05:30 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:21:43.735 11:05:30 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:21:43.735 11:05:30 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:43.735 11:05:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:43.735 11:05:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:43.735 11:05:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:43.735 11:05:30 keyring_file 
-- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:43.993 11:05:30 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:21:43.993 11:05:30 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:21:43.993 11:05:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:21:43.993 11:05:30 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:21:44.253 11:05:31 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:21:44.253 11:05:31 keyring_file -- keyring/file.sh@1 -- # cleanup 00:21:44.253 11:05:31 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.nIwAjtuFKa /tmp/tmp.w4wxvelCqp 00:21:44.253 11:05:31 keyring_file -- keyring/file.sh@20 -- # killprocess 85231 00:21:44.253 11:05:31 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 85231 ']' 00:21:44.253 11:05:31 keyring_file -- common/autotest_common.sh@958 -- # kill -0 85231 00:21:44.253 11:05:31 keyring_file -- common/autotest_common.sh@959 -- # uname 00:21:44.253 11:05:31 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:44.253 11:05:31 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85231 00:21:44.253 11:05:31 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:44.253 11:05:31 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:44.512 killing process with pid 85231 00:21:44.512 11:05:31 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85231' 00:21:44.512 11:05:31 keyring_file -- common/autotest_common.sh@973 -- # kill 85231 00:21:44.512 Received shutdown signal, test time was about 1.000000 seconds 00:21:44.512 00:21:44.512 Latency(us) 00:21:44.512 [2024-11-15T11:05:31.373Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:44.512 [2024-11-15T11:05:31.373Z] =================================================================================================================== 00:21:44.512 [2024-11-15T11:05:31.373Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:44.512 11:05:31 keyring_file -- common/autotest_common.sh@978 -- # wait 85231 00:21:44.512 11:05:31 keyring_file -- keyring/file.sh@21 -- # killprocess 84963 00:21:44.512 11:05:31 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 84963 ']' 00:21:44.512 11:05:31 keyring_file -- common/autotest_common.sh@958 -- # kill -0 84963 00:21:44.512 11:05:31 keyring_file -- common/autotest_common.sh@959 -- # uname 00:21:44.771 11:05:31 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:44.771 11:05:31 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84963 00:21:44.771 11:05:31 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:44.771 11:05:31 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:44.771 killing process with pid 84963 00:21:44.771 11:05:31 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84963' 00:21:44.771 11:05:31 keyring_file -- common/autotest_common.sh@973 -- # kill 84963 00:21:44.771 11:05:31 keyring_file -- common/autotest_common.sh@978 -- # wait 84963 00:21:45.339 00:21:45.339 real 0m16.524s 00:21:45.339 user 0m41.043s 00:21:45.339 sys 0m3.234s 00:21:45.339 11:05:31 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:45.339 
************************************ 00:21:45.339 END TEST keyring_file 00:21:45.339 11:05:31 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:45.339 ************************************ 00:21:45.339 11:05:31 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:21:45.339 11:05:31 -- spdk/autotest.sh@294 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:21:45.339 11:05:31 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:45.339 11:05:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:45.339 11:05:31 -- common/autotest_common.sh@10 -- # set +x 00:21:45.339 ************************************ 00:21:45.339 START TEST keyring_linux 00:21:45.339 ************************************ 00:21:45.339 11:05:31 keyring_linux -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:21:45.339 Joined session keyring: 827404228 00:21:45.339 * Looking for test storage... 00:21:45.339 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:21:45.339 11:05:32 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:45.339 11:05:32 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:21:45.339 11:05:32 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:45.339 11:05:32 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:45.339 11:05:32 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:45.339 11:05:32 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:45.339 11:05:32 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:45.339 11:05:32 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:21:45.339 11:05:32 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:21:45.339 11:05:32 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:21:45.339 11:05:32 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:21:45.339 11:05:32 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:21:45.339 11:05:32 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:21:45.339 11:05:32 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:21:45.339 11:05:32 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:45.339 11:05:32 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:21:45.339 11:05:32 keyring_linux -- scripts/common.sh@345 -- # : 1 00:21:45.339 11:05:32 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:45.339 11:05:32 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:45.339 11:05:32 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:21:45.339 11:05:32 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:21:45.339 11:05:32 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:45.339 11:05:32 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:21:45.339 11:05:32 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:21:45.339 11:05:32 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:21:45.339 11:05:32 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:21:45.339 11:05:32 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:45.339 11:05:32 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:21:45.339 11:05:32 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:21:45.339 11:05:32 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:45.339 11:05:32 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:45.339 11:05:32 keyring_linux -- scripts/common.sh@368 -- # return 0 00:21:45.339 11:05:32 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:45.339 11:05:32 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:45.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:45.339 --rc genhtml_branch_coverage=1 00:21:45.339 --rc genhtml_function_coverage=1 00:21:45.340 --rc genhtml_legend=1 00:21:45.340 --rc geninfo_all_blocks=1 00:21:45.340 --rc geninfo_unexecuted_blocks=1 00:21:45.340 00:21:45.340 ' 00:21:45.340 11:05:32 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:45.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:45.340 --rc genhtml_branch_coverage=1 00:21:45.340 --rc genhtml_function_coverage=1 00:21:45.340 --rc genhtml_legend=1 00:21:45.340 --rc geninfo_all_blocks=1 00:21:45.340 --rc geninfo_unexecuted_blocks=1 00:21:45.340 00:21:45.340 ' 00:21:45.340 11:05:32 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:45.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:45.340 --rc genhtml_branch_coverage=1 00:21:45.340 --rc genhtml_function_coverage=1 00:21:45.340 --rc genhtml_legend=1 00:21:45.340 --rc geninfo_all_blocks=1 00:21:45.340 --rc geninfo_unexecuted_blocks=1 00:21:45.340 00:21:45.340 ' 00:21:45.340 11:05:32 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:45.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:45.340 --rc genhtml_branch_coverage=1 00:21:45.340 --rc genhtml_function_coverage=1 00:21:45.340 --rc genhtml_legend=1 00:21:45.340 --rc geninfo_all_blocks=1 00:21:45.340 --rc geninfo_unexecuted_blocks=1 00:21:45.340 00:21:45.340 ' 00:21:45.340 11:05:32 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:21:45.340 11:05:32 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:45.340 11:05:32 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:21:45.340 11:05:32 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:45.340 11:05:32 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:45.340 11:05:32 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:45.340 11:05:32 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:45.340 11:05:32 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:45.340 11:05:32 
keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:45.340 11:05:32 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:45.340 11:05:32 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:45.340 11:05:32 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:45.340 11:05:32 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:45.340 11:05:32 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:21:45.340 11:05:32 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=02f14d39-9b07-4abc-bc4a-e88d43a336ca 00:21:45.340 11:05:32 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:45.340 11:05:32 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:45.340 11:05:32 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:45.340 11:05:32 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:45.340 11:05:32 keyring_linux -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:45.340 11:05:32 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:21:45.340 11:05:32 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:45.340 11:05:32 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:45.340 11:05:32 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:45.340 11:05:32 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.340 11:05:32 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.340 11:05:32 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.340 11:05:32 keyring_linux -- paths/export.sh@5 -- # export PATH 00:21:45.340 11:05:32 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.340 11:05:32 keyring_linux -- nvmf/common.sh@51 -- # : 0 
00:21:45.340 11:05:32 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:45.340 11:05:32 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:45.340 11:05:32 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:45.340 11:05:32 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:45.340 11:05:32 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:45.340 11:05:32 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:45.340 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:45.340 11:05:32 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:45.340 11:05:32 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:45.340 11:05:32 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:45.340 11:05:32 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:21:45.340 11:05:32 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:21:45.340 11:05:32 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:21:45.340 11:05:32 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:21:45.621 11:05:32 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:21:45.621 11:05:32 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:21:45.621 11:05:32 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:21:45.621 11:05:32 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:21:45.621 11:05:32 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:21:45.621 11:05:32 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:21:45.621 11:05:32 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:21:45.621 11:05:32 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:21:45.621 11:05:32 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:21:45.621 11:05:32 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:21:45.621 11:05:32 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:21:45.621 11:05:32 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:21:45.621 11:05:32 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:21:45.621 11:05:32 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:21:45.621 11:05:32 keyring_linux -- nvmf/common.sh@733 -- # python - 00:21:45.621 11:05:32 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:21:45.621 /tmp/:spdk-test:key0 00:21:45.621 11:05:32 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:21:45.621 11:05:32 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:21:45.621 11:05:32 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:21:45.621 11:05:32 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:21:45.621 11:05:32 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:21:45.621 11:05:32 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:21:45.621 11:05:32 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:21:45.621 11:05:32 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 
112233445566778899aabbccddeeff00 0 00:21:45.621 11:05:32 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:21:45.621 11:05:32 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:21:45.621 11:05:32 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:21:45.621 11:05:32 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:21:45.621 11:05:32 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:21:45.621 11:05:32 keyring_linux -- nvmf/common.sh@733 -- # python - 00:21:45.621 11:05:32 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:21:45.621 /tmp/:spdk-test:key1 00:21:45.621 11:05:32 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:21:45.621 11:05:32 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=85358 00:21:45.621 11:05:32 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:45.621 11:05:32 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 85358 00:21:45.621 11:05:32 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 85358 ']' 00:21:45.621 11:05:32 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:45.621 11:05:32 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:45.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:45.621 11:05:32 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:45.621 11:05:32 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:45.621 11:05:32 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:21:45.621 [2024-11-15 11:05:32.352306] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
00:21:45.621 [2024-11-15 11:05:32.352402] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85358 ] 00:21:45.925 [2024-11-15 11:05:32.491778] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:45.925 [2024-11-15 11:05:32.556139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:45.925 [2024-11-15 11:05:32.650872] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:46.184 11:05:32 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:46.184 11:05:32 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:21:46.184 11:05:32 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:21:46.184 11:05:32 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.184 11:05:32 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:21:46.184 [2024-11-15 11:05:32.913147] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:46.184 null0 00:21:46.184 [2024-11-15 11:05:32.945105] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:46.184 [2024-11-15 11:05:32.945315] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:21:46.184 11:05:32 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.184 11:05:32 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:21:46.184 854865272 00:21:46.184 11:05:32 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:21:46.184 729648814 00:21:46.184 11:05:32 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=85373 00:21:46.184 11:05:32 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:21:46.184 11:05:32 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 85373 /var/tmp/bperf.sock 00:21:46.184 11:05:32 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 85373 ']' 00:21:46.184 11:05:32 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:46.185 11:05:32 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:46.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:46.185 11:05:32 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:46.185 11:05:32 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:46.185 11:05:32 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:21:46.185 [2024-11-15 11:05:33.033997] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
00:21:46.185 [2024-11-15 11:05:33.034096] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85373 ] 00:21:46.444 [2024-11-15 11:05:33.186936] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:46.444 [2024-11-15 11:05:33.251176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:47.379 11:05:34 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:47.379 11:05:34 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:21:47.379 11:05:34 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:21:47.379 11:05:34 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:21:47.639 11:05:34 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:21:47.639 11:05:34 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:21:47.898 [2024-11-15 11:05:34.540480] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:47.898 11:05:34 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:21:47.898 11:05:34 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:21:48.157 [2024-11-15 11:05:34.902216] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:48.157 nvme0n1 00:21:48.157 11:05:34 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:21:48.157 11:05:34 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:21:48.157 11:05:34 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:21:48.157 11:05:34 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:21:48.157 11:05:34 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:21:48.157 11:05:34 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:48.416 11:05:35 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:21:48.416 11:05:35 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:21:48.416 11:05:35 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:21:48.416 11:05:35 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:21:48.416 11:05:35 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:48.416 11:05:35 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:48.416 11:05:35 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:21:48.675 11:05:35 keyring_linux -- keyring/linux.sh@25 -- # sn=854865272 00:21:48.675 11:05:35 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:21:48.675 11:05:35 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 
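For comparison with the file-based variant, the kernel-keyring flow traced here reduces to the commands below. Key names, the interchange-format PSK strings, and the socket address are taken from this run; the sketch assumes the keyctl-session wrapper has already joined a session keyring and that bdevperf was started with --wait-for-rpc on /var/tmp/bperf.sock:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Store the interchange-format PSKs as user keys in the session keyring;
  # keyctl prints the serial numbers (854865272 and 729648814 in this run).
  keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s
  keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s
  # Enable the keyring_linux backend before the framework starts, then attach by key name.
  $RPC -s /var/tmp/bperf.sock keyring_linux_set_options --enable
  $RPC -s /var/tmp/bperf.sock framework_start_init
  $RPC -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0
  # The serial-number check that follows resolves :spdk-test:key0 via keyctl search
  # and compares it against the sn reported by keyring_get_keys.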
00:21:48.675 11:05:35 keyring_linux -- keyring/linux.sh@26 -- # [[ 854865272 == \8\5\4\8\6\5\2\7\2 ]] 00:21:48.675 11:05:35 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 854865272 00:21:48.675 11:05:35 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:21:48.675 11:05:35 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:48.934 Running I/O for 1 seconds... 00:21:49.871 13375.00 IOPS, 52.25 MiB/s 00:21:49.871 Latency(us) 00:21:49.871 [2024-11-15T11:05:36.732Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:49.871 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:21:49.871 nvme0n1 : 1.01 13359.19 52.18 0.00 0.00 9525.66 4259.84 13762.56 00:21:49.871 [2024-11-15T11:05:36.732Z] =================================================================================================================== 00:21:49.871 [2024-11-15T11:05:36.732Z] Total : 13359.19 52.18 0.00 0.00 9525.66 4259.84 13762.56 00:21:49.871 { 00:21:49.871 "results": [ 00:21:49.871 { 00:21:49.871 "job": "nvme0n1", 00:21:49.871 "core_mask": "0x2", 00:21:49.871 "workload": "randread", 00:21:49.871 "status": "finished", 00:21:49.871 "queue_depth": 128, 00:21:49.871 "io_size": 4096, 00:21:49.871 "runtime": 1.01084, 00:21:49.871 "iops": 13359.186419215703, 00:21:49.871 "mibps": 52.18432195006134, 00:21:49.871 "io_failed": 0, 00:21:49.871 "io_timeout": 0, 00:21:49.871 "avg_latency_us": 9525.661835415769, 00:21:49.871 "min_latency_us": 4259.84, 00:21:49.871 "max_latency_us": 13762.56 00:21:49.871 } 00:21:49.871 ], 00:21:49.871 "core_count": 1 00:21:49.871 } 00:21:49.871 11:05:36 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:21:49.871 11:05:36 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:21:50.439 11:05:36 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:21:50.439 11:05:36 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:21:50.439 11:05:36 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:21:50.439 11:05:36 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:21:50.439 11:05:36 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:21:50.439 11:05:36 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:50.439 11:05:37 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:21:50.439 11:05:37 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:21:50.439 11:05:37 keyring_linux -- keyring/linux.sh@23 -- # return 00:21:50.439 11:05:37 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:21:50.439 11:05:37 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:21:50.439 11:05:37 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:21:50.439 11:05:37 
keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:21:50.439 11:05:37 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:50.439 11:05:37 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:21:50.439 11:05:37 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:50.439 11:05:37 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:21:50.439 11:05:37 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:21:50.697 [2024-11-15 11:05:37.540423] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:50.697 [2024-11-15 11:05:37.540996] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18435d0 (107): Transport endpoint is not connected 00:21:50.697 [2024-11-15 11:05:37.541983] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18435d0 (9): Bad file descriptor 00:21:50.697 [2024-11-15 11:05:37.542988] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:21:50.697 [2024-11-15 11:05:37.543204] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:21:50.698 [2024-11-15 11:05:37.543248] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:21:50.698 [2024-11-15 11:05:37.543261] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:21:50.698 request: 00:21:50.698 { 00:21:50.698 "name": "nvme0", 00:21:50.698 "trtype": "tcp", 00:21:50.698 "traddr": "127.0.0.1", 00:21:50.698 "adrfam": "ipv4", 00:21:50.698 "trsvcid": "4420", 00:21:50.698 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:50.698 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:50.698 "prchk_reftag": false, 00:21:50.698 "prchk_guard": false, 00:21:50.698 "hdgst": false, 00:21:50.698 "ddgst": false, 00:21:50.698 "psk": ":spdk-test:key1", 00:21:50.698 "allow_unrecognized_csi": false, 00:21:50.698 "method": "bdev_nvme_attach_controller", 00:21:50.698 "req_id": 1 00:21:50.698 } 00:21:50.698 Got JSON-RPC error response 00:21:50.698 response: 00:21:50.698 { 00:21:50.698 "code": -5, 00:21:50.698 "message": "Input/output error" 00:21:50.698 } 00:21:50.957 11:05:37 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:21:50.957 11:05:37 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:50.957 11:05:37 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:50.957 11:05:37 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:50.957 11:05:37 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:21:50.957 11:05:37 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:21:50.957 11:05:37 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:21:50.957 11:05:37 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:21:50.957 11:05:37 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:21:50.957 11:05:37 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:21:50.957 11:05:37 keyring_linux -- keyring/linux.sh@33 -- # sn=854865272 00:21:50.957 11:05:37 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 854865272 00:21:50.957 1 links removed 00:21:50.957 11:05:37 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:21:50.957 11:05:37 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:21:50.957 11:05:37 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:21:50.957 11:05:37 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:21:50.957 11:05:37 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:21:50.957 11:05:37 keyring_linux -- keyring/linux.sh@33 -- # sn=729648814 00:21:50.957 11:05:37 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 729648814 00:21:50.957 1 links removed 00:21:50.957 11:05:37 keyring_linux -- keyring/linux.sh@41 -- # killprocess 85373 00:21:50.957 11:05:37 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 85373 ']' 00:21:50.957 11:05:37 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 85373 00:21:50.957 11:05:37 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:21:50.957 11:05:37 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:50.957 11:05:37 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85373 00:21:50.957 11:05:37 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:50.957 11:05:37 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:50.957 killing process with pid 85373 00:21:50.957 Received shutdown signal, test time was about 1.000000 seconds 00:21:50.957 00:21:50.957 Latency(us) 00:21:50.957 [2024-11-15T11:05:37.818Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:50.957 [2024-11-15T11:05:37.818Z] 
=================================================================================================================== 00:21:50.957 [2024-11-15T11:05:37.818Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:50.957 11:05:37 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85373' 00:21:50.957 11:05:37 keyring_linux -- common/autotest_common.sh@973 -- # kill 85373 00:21:50.957 11:05:37 keyring_linux -- common/autotest_common.sh@978 -- # wait 85373 00:21:51.216 11:05:37 keyring_linux -- keyring/linux.sh@42 -- # killprocess 85358 00:21:51.216 11:05:37 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 85358 ']' 00:21:51.216 11:05:37 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 85358 00:21:51.216 11:05:37 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:21:51.216 11:05:37 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:51.216 11:05:37 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85358 00:21:51.216 killing process with pid 85358 00:21:51.216 11:05:37 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:51.216 11:05:37 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:51.216 11:05:37 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85358' 00:21:51.216 11:05:37 keyring_linux -- common/autotest_common.sh@973 -- # kill 85358 00:21:51.216 11:05:37 keyring_linux -- common/autotest_common.sh@978 -- # wait 85358 00:21:51.784 ************************************ 00:21:51.784 END TEST keyring_linux 00:21:51.784 ************************************ 00:21:51.784 00:21:51.784 real 0m6.481s 00:21:51.784 user 0m12.644s 00:21:51.784 sys 0m1.687s 00:21:51.784 11:05:38 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:51.784 11:05:38 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:21:51.784 11:05:38 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:21:51.784 11:05:38 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:21:51.784 11:05:38 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:21:51.784 11:05:38 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:21:51.784 11:05:38 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:21:51.784 11:05:38 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:21:51.784 11:05:38 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:21:51.784 11:05:38 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:21:51.784 11:05:38 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:21:51.784 11:05:38 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:21:51.784 11:05:38 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:21:51.784 11:05:38 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:21:51.784 11:05:38 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:21:51.784 11:05:38 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:21:51.784 11:05:38 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:21:51.784 11:05:38 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:21:51.784 11:05:38 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:21:51.784 11:05:38 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:51.784 11:05:38 -- common/autotest_common.sh@10 -- # set +x 00:21:51.784 11:05:38 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:21:51.784 11:05:38 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:21:51.784 11:05:38 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:21:51.784 11:05:38 -- common/autotest_common.sh@10 -- # set +x 00:21:53.690 INFO: APP 
EXITING 00:21:53.690 INFO: killing all VMs 00:21:53.690 INFO: killing vhost app 00:21:53.690 INFO: EXIT DONE 00:21:54.257 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:54.257 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:21:54.516 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:21:55.083 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:55.083 Cleaning 00:21:55.083 Removing: /var/run/dpdk/spdk0/config 00:21:55.083 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:21:55.083 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:21:55.083 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:21:55.083 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:21:55.083 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:21:55.083 Removing: /var/run/dpdk/spdk0/hugepage_info 00:21:55.083 Removing: /var/run/dpdk/spdk1/config 00:21:55.083 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:21:55.083 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:21:55.083 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:21:55.083 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:21:55.083 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:21:55.083 Removing: /var/run/dpdk/spdk1/hugepage_info 00:21:55.083 Removing: /var/run/dpdk/spdk2/config 00:21:55.083 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:21:55.083 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:21:55.083 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:21:55.083 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:21:55.083 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:21:55.083 Removing: /var/run/dpdk/spdk2/hugepage_info 00:21:55.083 Removing: /var/run/dpdk/spdk3/config 00:21:55.083 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:21:55.083 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:21:55.083 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:21:55.083 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:21:55.083 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:21:55.083 Removing: /var/run/dpdk/spdk3/hugepage_info 00:21:55.083 Removing: /var/run/dpdk/spdk4/config 00:21:55.083 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:21:55.083 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:21:55.083 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:21:55.083 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:21:55.083 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:21:55.083 Removing: /var/run/dpdk/spdk4/hugepage_info 00:21:55.083 Removing: /dev/shm/nvmf_trace.0 00:21:55.342 Removing: /dev/shm/spdk_tgt_trace.pid56647 00:21:55.342 Removing: /var/run/dpdk/spdk0 00:21:55.342 Removing: /var/run/dpdk/spdk1 00:21:55.342 Removing: /var/run/dpdk/spdk2 00:21:55.342 Removing: /var/run/dpdk/spdk3 00:21:55.342 Removing: /var/run/dpdk/spdk4 00:21:55.342 Removing: /var/run/dpdk/spdk_pid56494 00:21:55.342 Removing: /var/run/dpdk/spdk_pid56647 00:21:55.342 Removing: /var/run/dpdk/spdk_pid56846 00:21:55.342 Removing: /var/run/dpdk/spdk_pid56928 00:21:55.342 Removing: /var/run/dpdk/spdk_pid56960 00:21:55.342 Removing: /var/run/dpdk/spdk_pid57069 00:21:55.342 Removing: /var/run/dpdk/spdk_pid57087 00:21:55.342 Removing: /var/run/dpdk/spdk_pid57221 00:21:55.342 Removing: /var/run/dpdk/spdk_pid57417 00:21:55.342 Removing: /var/run/dpdk/spdk_pid57565 00:21:55.342 
Removing: /var/run/dpdk/spdk_pid57643 00:21:55.342 Removing: /var/run/dpdk/spdk_pid57720 00:21:55.342 Removing: /var/run/dpdk/spdk_pid57806 00:21:55.342 Removing: /var/run/dpdk/spdk_pid57883 00:21:55.342 Removing: /var/run/dpdk/spdk_pid57922 00:21:55.342 Removing: /var/run/dpdk/spdk_pid57952 00:21:55.342 Removing: /var/run/dpdk/spdk_pid58021 00:21:55.342 Removing: /var/run/dpdk/spdk_pid58108 00:21:55.342 Removing: /var/run/dpdk/spdk_pid58541 00:21:55.342 Removing: /var/run/dpdk/spdk_pid58591 00:21:55.342 Removing: /var/run/dpdk/spdk_pid58629 00:21:55.342 Removing: /var/run/dpdk/spdk_pid58643 00:21:55.342 Removing: /var/run/dpdk/spdk_pid58710 00:21:55.342 Removing: /var/run/dpdk/spdk_pid58713 00:21:55.342 Removing: /var/run/dpdk/spdk_pid58780 00:21:55.342 Removing: /var/run/dpdk/spdk_pid58790 00:21:55.342 Removing: /var/run/dpdk/spdk_pid58834 00:21:55.342 Removing: /var/run/dpdk/spdk_pid58850 00:21:55.342 Removing: /var/run/dpdk/spdk_pid58890 00:21:55.342 Removing: /var/run/dpdk/spdk_pid58901 00:21:55.342 Removing: /var/run/dpdk/spdk_pid59037 00:21:55.342 Removing: /var/run/dpdk/spdk_pid59072 00:21:55.342 Removing: /var/run/dpdk/spdk_pid59155 00:21:55.342 Removing: /var/run/dpdk/spdk_pid59481 00:21:55.342 Removing: /var/run/dpdk/spdk_pid59499 00:21:55.342 Removing: /var/run/dpdk/spdk_pid59530 00:21:55.342 Removing: /var/run/dpdk/spdk_pid59549 00:21:55.342 Removing: /var/run/dpdk/spdk_pid59564 00:21:55.342 Removing: /var/run/dpdk/spdk_pid59583 00:21:55.342 Removing: /var/run/dpdk/spdk_pid59601 00:21:55.342 Removing: /var/run/dpdk/spdk_pid59618 00:21:55.342 Removing: /var/run/dpdk/spdk_pid59637 00:21:55.342 Removing: /var/run/dpdk/spdk_pid59656 00:21:55.342 Removing: /var/run/dpdk/spdk_pid59677 00:21:55.342 Removing: /var/run/dpdk/spdk_pid59696 00:21:55.342 Removing: /var/run/dpdk/spdk_pid59709 00:21:55.342 Removing: /var/run/dpdk/spdk_pid59725 00:21:55.342 Removing: /var/run/dpdk/spdk_pid59744 00:21:55.342 Removing: /var/run/dpdk/spdk_pid59763 00:21:55.342 Removing: /var/run/dpdk/spdk_pid59784 00:21:55.342 Removing: /var/run/dpdk/spdk_pid59803 00:21:55.342 Removing: /var/run/dpdk/spdk_pid59811 00:21:55.342 Removing: /var/run/dpdk/spdk_pid59832 00:21:55.342 Removing: /var/run/dpdk/spdk_pid59868 00:21:55.342 Removing: /var/run/dpdk/spdk_pid59881 00:21:55.342 Removing: /var/run/dpdk/spdk_pid59911 00:21:55.342 Removing: /var/run/dpdk/spdk_pid59983 00:21:55.342 Removing: /var/run/dpdk/spdk_pid60017 00:21:55.342 Removing: /var/run/dpdk/spdk_pid60021 00:21:55.342 Removing: /var/run/dpdk/spdk_pid60055 00:21:55.342 Removing: /var/run/dpdk/spdk_pid60070 00:21:55.342 Removing: /var/run/dpdk/spdk_pid60072 00:21:55.342 Removing: /var/run/dpdk/spdk_pid60120 00:21:55.342 Removing: /var/run/dpdk/spdk_pid60128 00:21:55.342 Removing: /var/run/dpdk/spdk_pid60162 00:21:55.342 Removing: /var/run/dpdk/spdk_pid60177 00:21:55.342 Removing: /var/run/dpdk/spdk_pid60183 00:21:55.342 Removing: /var/run/dpdk/spdk_pid60199 00:21:55.342 Removing: /var/run/dpdk/spdk_pid60203 00:21:55.342 Removing: /var/run/dpdk/spdk_pid60218 00:21:55.342 Removing: /var/run/dpdk/spdk_pid60228 00:21:55.342 Removing: /var/run/dpdk/spdk_pid60237 00:21:55.342 Removing: /var/run/dpdk/spdk_pid60271 00:21:55.342 Removing: /var/run/dpdk/spdk_pid60292 00:21:55.342 Removing: /var/run/dpdk/spdk_pid60307 00:21:55.342 Removing: /var/run/dpdk/spdk_pid60339 00:21:55.342 Removing: /var/run/dpdk/spdk_pid60345 00:21:55.342 Removing: /var/run/dpdk/spdk_pid60358 00:21:55.342 Removing: /var/run/dpdk/spdk_pid60399 00:21:55.600 Removing: 
00:21:55.600 Removing: /var/run/dpdk/spdk_pid60410
00:21:55.600 Removing: /var/run/dpdk/spdk_pid60443
00:21:55.600 Removing: /var/run/dpdk/spdk_pid60446
00:21:55.600 Removing: /var/run/dpdk/spdk_pid60459
00:21:55.600 Removing: /var/run/dpdk/spdk_pid60467
00:21:55.600 Removing: /var/run/dpdk/spdk_pid60474
00:21:55.600 Removing: /var/run/dpdk/spdk_pid60482
00:21:55.600 Removing: /var/run/dpdk/spdk_pid60489
00:21:55.600 Removing: /var/run/dpdk/spdk_pid60502
00:21:55.600 Removing: /var/run/dpdk/spdk_pid60579
00:21:55.600 Removing: /var/run/dpdk/spdk_pid60626
00:21:55.600 Removing: /var/run/dpdk/spdk_pid60739
00:21:55.600 Removing: /var/run/dpdk/spdk_pid60777
00:21:55.600 Removing: /var/run/dpdk/spdk_pid60823
00:21:55.600 Removing: /var/run/dpdk/spdk_pid60837
00:21:55.600 Removing: /var/run/dpdk/spdk_pid60859
00:21:55.600 Removing: /var/run/dpdk/spdk_pid60874
00:21:55.600 Removing: /var/run/dpdk/spdk_pid60911
00:21:55.600 Removing: /var/run/dpdk/spdk_pid60932
00:21:55.600 Removing: /var/run/dpdk/spdk_pid61011
00:21:55.600 Removing: /var/run/dpdk/spdk_pid61029
00:21:55.600 Removing: /var/run/dpdk/spdk_pid61073
00:21:55.600 Removing: /var/run/dpdk/spdk_pid61148
00:21:55.600 Removing: /var/run/dpdk/spdk_pid61204
00:21:55.600 Removing: /var/run/dpdk/spdk_pid61241
00:21:55.600 Removing: /var/run/dpdk/spdk_pid61342
00:21:55.600 Removing: /var/run/dpdk/spdk_pid61384
00:21:55.600 Removing: /var/run/dpdk/spdk_pid61422
00:21:55.600 Removing: /var/run/dpdk/spdk_pid61649
00:21:55.600 Removing: /var/run/dpdk/spdk_pid61746
00:21:55.600 Removing: /var/run/dpdk/spdk_pid61780
00:21:55.600 Removing: /var/run/dpdk/spdk_pid61804
00:21:55.600 Removing: /var/run/dpdk/spdk_pid61843
00:21:55.600 Removing: /var/run/dpdk/spdk_pid61877
00:21:55.600 Removing: /var/run/dpdk/spdk_pid61910
00:21:55.600 Removing: /var/run/dpdk/spdk_pid61947
00:21:55.600 Removing: /var/run/dpdk/spdk_pid62334
00:21:55.600 Removing: /var/run/dpdk/spdk_pid62372
00:21:55.600 Removing: /var/run/dpdk/spdk_pid62722
00:21:55.600 Removing: /var/run/dpdk/spdk_pid63174
00:21:55.600 Removing: /var/run/dpdk/spdk_pid63445
00:21:55.600 Removing: /var/run/dpdk/spdk_pid64331
00:21:55.600 Removing: /var/run/dpdk/spdk_pid65247
00:21:55.600 Removing: /var/run/dpdk/spdk_pid65370
00:21:55.600 Removing: /var/run/dpdk/spdk_pid65432
00:21:55.600 Removing: /var/run/dpdk/spdk_pid66853
00:21:55.600 Removing: /var/run/dpdk/spdk_pid67166
00:21:55.600 Removing: /var/run/dpdk/spdk_pid70812
00:21:55.600 Removing: /var/run/dpdk/spdk_pid71165
00:21:55.600 Removing: /var/run/dpdk/spdk_pid71274
00:21:55.600 Removing: /var/run/dpdk/spdk_pid71413
00:21:55.600 Removing: /var/run/dpdk/spdk_pid71434
00:21:55.600 Removing: /var/run/dpdk/spdk_pid71464
00:21:55.600 Removing: /var/run/dpdk/spdk_pid71485
00:21:55.600 Removing: /var/run/dpdk/spdk_pid71576
00:21:55.600 Removing: /var/run/dpdk/spdk_pid71700
00:21:55.600 Removing: /var/run/dpdk/spdk_pid71830
00:21:55.600 Removing: /var/run/dpdk/spdk_pid71917
00:21:55.600 Removing: /var/run/dpdk/spdk_pid72098
00:21:55.600 Removing: /var/run/dpdk/spdk_pid72174
00:21:55.600 Removing: /var/run/dpdk/spdk_pid72259
00:21:55.600 Removing: /var/run/dpdk/spdk_pid72614
00:21:55.600 Removing: /var/run/dpdk/spdk_pid73017
00:21:55.600 Removing: /var/run/dpdk/spdk_pid73018
00:21:55.600 Removing: /var/run/dpdk/spdk_pid73019
00:21:55.600 Removing: /var/run/dpdk/spdk_pid73281
00:21:55.600 Removing: /var/run/dpdk/spdk_pid73538
00:21:55.600 Removing: /var/run/dpdk/spdk_pid73925
00:21:55.600 Removing: /var/run/dpdk/spdk_pid73933
00:21:55.600 Removing: /var/run/dpdk/spdk_pid74249
00:21:55.600 Removing: /var/run/dpdk/spdk_pid74268
00:21:55.600 Removing: /var/run/dpdk/spdk_pid74283
00:21:55.600 Removing: /var/run/dpdk/spdk_pid74308
00:21:55.600 Removing: /var/run/dpdk/spdk_pid74323
00:21:55.600 Removing: /var/run/dpdk/spdk_pid74667
00:21:55.600 Removing: /var/run/dpdk/spdk_pid74710
00:21:55.600 Removing: /var/run/dpdk/spdk_pid75040
00:21:55.600 Removing: /var/run/dpdk/spdk_pid75230
00:21:55.600 Removing: /var/run/dpdk/spdk_pid75649
00:21:55.600 Removing: /var/run/dpdk/spdk_pid76196
00:21:55.600 Removing: /var/run/dpdk/spdk_pid77097
00:21:55.600 Removing: /var/run/dpdk/spdk_pid77726
00:21:55.858 Removing: /var/run/dpdk/spdk_pid77738
00:21:55.858 Removing: /var/run/dpdk/spdk_pid79745
00:21:55.858 Removing: /var/run/dpdk/spdk_pid79792
00:21:55.858 Removing: /var/run/dpdk/spdk_pid79850
00:21:55.858 Removing: /var/run/dpdk/spdk_pid79898
00:21:55.858 Removing: /var/run/dpdk/spdk_pid80007
00:21:55.858 Removing: /var/run/dpdk/spdk_pid80060
00:21:55.858 Removing: /var/run/dpdk/spdk_pid80119
00:21:55.858 Removing: /var/run/dpdk/spdk_pid80173
00:21:55.858 Removing: /var/run/dpdk/spdk_pid80531
00:21:55.858 Removing: /var/run/dpdk/spdk_pid81734
00:21:55.858 Removing: /var/run/dpdk/spdk_pid81873
00:21:55.858 Removing: /var/run/dpdk/spdk_pid82108
00:21:55.858 Removing: /var/run/dpdk/spdk_pid82710
00:21:55.858 Removing: /var/run/dpdk/spdk_pid82870
00:21:55.858 Removing: /var/run/dpdk/spdk_pid83028
00:21:55.858 Removing: /var/run/dpdk/spdk_pid83125
00:21:55.858 Removing: /var/run/dpdk/spdk_pid83285
00:21:55.858 Removing: /var/run/dpdk/spdk_pid83394
00:21:55.858 Removing: /var/run/dpdk/spdk_pid84097
00:21:55.858 Removing: /var/run/dpdk/spdk_pid84132
00:21:55.858 Removing: /var/run/dpdk/spdk_pid84162
00:21:55.858 Removing: /var/run/dpdk/spdk_pid84417
00:21:55.858 Removing: /var/run/dpdk/spdk_pid84452
00:21:55.858 Removing: /var/run/dpdk/spdk_pid84487
00:21:55.858 Removing: /var/run/dpdk/spdk_pid84963
00:21:55.858 Removing: /var/run/dpdk/spdk_pid84979
00:21:55.858 Removing: /var/run/dpdk/spdk_pid85231
00:21:55.858 Removing: /var/run/dpdk/spdk_pid85358
00:21:55.858 Removing: /var/run/dpdk/spdk_pid85373
00:21:55.858 Clean
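
The Cleaning/Removing/Clean block above is the post-test cleanup of DPDK runtime state left behind by the SPDK applications that ran during the job: each app instance keeps a runtime directory (config, fbarray_*, hugepage_info) and a per-PID entry under /var/run/dpdk, plus trace buffers under /dev/shm. As a rough, hedged sketch only (the real logic lives in SPDK's autotest scripts; the helper name below is made up for illustration), the step amounts to:

#!/usr/bin/env bash
# Illustrative sketch of the cleanup logged above; not the actual SPDK
# autotest_common.sh implementation.
post_test_cleanup() {    # hypothetical helper name
    local path
    for path in /var/run/dpdk/spdk* /dev/shm/nvmf_trace.* /dev/shm/spdk_tgt_trace.pid*; do
        [ -e "$path" ] || continue     # skip unmatched globs
        echo "Removing: $path"
        rm -rf "$path"
    done
    echo "Clean"
}
post_test_cleanup
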
00:21:55.858 11:05:42 -- common/autotest_common.sh@1453 -- # return 0
00:21:55.858 11:05:42 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:21:55.858 11:05:42 -- common/autotest_common.sh@732 -- # xtrace_disable
00:21:55.858 11:05:42 -- common/autotest_common.sh@10 -- # set +x
00:21:55.858 11:05:42 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:21:55.858 11:05:42 -- common/autotest_common.sh@732 -- # xtrace_disable
00:21:55.858 11:05:42 -- common/autotest_common.sh@10 -- # set +x
00:21:55.858 11:05:42 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:21:55.858 11:05:42 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:21:55.858 11:05:42 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:21:55.858 11:05:42 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:21:55.858 11:05:42 -- spdk/autotest.sh@398 -- # hostname
00:21:55.858 11:05:42 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:21:56.115 geninfo: WARNING: invalid characters removed from testname!
00:22:28.183 11:06:09 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:22:28.183 11:06:13 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:22:30.084 11:06:16 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:22:33.369 11:06:19 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:22:36.664 11:06:22 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:22:39.214 11:06:25 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:22:42.498 11:06:28 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:22:42.498 11:06:28 -- spdk/autorun.sh@1 -- $ timing_finish
00:22:42.498 11:06:28 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
00:22:42.498 11:06:28 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:22:42.498 11:06:28 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:22:42.498 11:06:28 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:22:42.498 + [[ -n 5188 ]]
00:22:42.498 + sudo kill 5188
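
The lcov and flamegraph.pl lines above are the coverage and timing epilogue that autotest runs before the pipeline tears the VM down. Condensed into one hedged sketch (paths, filter patterns, and flags are taken from the log; the long --rc option list is abbreviated to a single array, the timing.svg output redirection is an assumption since the log does not show where the flame graph is written, and this is not the literal spdk/autotest.sh code):

#!/usr/bin/env bash
# Condensed sketch of the coverage/timing epilogue shown above; illustrative only.
set -euo pipefail
OUT=/home/vagrant/spdk_repo/spdk/../output
RC=(--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1)   # log uses several more --rc flags

# 1. Capture test-time coverage data from the build tree into cov_test.info.
lcov "${RC[@]}" -q -c --no-external -d /home/vagrant/spdk_repo/spdk \
     -t fedora39-cloud-1721788873-2326 -o "$OUT/cov_test.info"

# 2. Merge the pre-test baseline with the test capture, then strip paths
#    that are not interesting for SPDK coverage (DPDK, system headers, tools).
lcov "${RC[@]}" -q -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"
for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
    lcov "${RC[@]}" -q -r "$OUT/cov_total.info" "$pattern" -o "$OUT/cov_total.info"
done

# 3. Render the per-step timing data as a flame graph, if FlameGraph is installed.
if [[ -x /usr/local/FlameGraph/flamegraph.pl ]]; then
    /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: \
        --countname seconds "$OUT/timing.txt" > "$OUT/timing.svg"   # output file name assumed
fi
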
00:22:42.507 [Pipeline] }
00:22:42.523 [Pipeline] // timeout
00:22:42.529 [Pipeline] }
00:22:42.544 [Pipeline] // stage
00:22:42.550 [Pipeline] }
00:22:42.564 [Pipeline] // catchError
00:22:42.573 [Pipeline] stage
00:22:42.575 [Pipeline] { (Stop VM)
00:22:42.587 [Pipeline] sh
00:22:42.866 + vagrant halt
00:22:47.061 ==> default: Halting domain...
00:22:52.351 [Pipeline] sh
00:22:52.628 + vagrant destroy -f
00:22:55.917 ==> default: Removing domain...
00:22:56.201 [Pipeline] sh
00:22:56.480 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output
00:22:56.489 [Pipeline] }
00:22:56.502 [Pipeline] // stage
00:22:56.508 [Pipeline] }
00:22:56.523 [Pipeline] // dir
00:22:56.528 [Pipeline] }
00:22:56.543 [Pipeline] // wrap
00:22:56.550 [Pipeline] }
00:22:56.563 [Pipeline] // catchError
00:22:56.573 [Pipeline] stage
00:22:56.575 [Pipeline] { (Epilogue)
00:22:56.589 [Pipeline] sh
00:22:56.872 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:23:03.492 [Pipeline] catchError
00:23:03.494 [Pipeline] {
00:23:03.507 [Pipeline] sh
00:23:03.787 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:23:03.787 Artifacts sizes are good
00:23:03.797 [Pipeline] }
00:23:03.814 [Pipeline] // catchError
00:23:03.827 [Pipeline] archiveArtifacts
00:23:03.850 Archiving artifacts
00:23:03.999 [Pipeline] cleanWs
00:23:04.010 [WS-CLEANUP] Deleting project workspace...
00:23:04.010 [WS-CLEANUP] Deferred wipeout is used...
00:23:04.016 [WS-CLEANUP] done
00:23:04.018 [Pipeline] }
00:23:04.034 [Pipeline] // stage
00:23:04.039 [Pipeline] }
00:23:04.054 [Pipeline] // node
00:23:04.061 [Pipeline] End of Pipeline
00:23:04.110 Finished: SUCCESS
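
For reference, the Stop VM and Epilogue stages above reduce to a short command sequence. The sketch below is hedged and illustrative only: it reuses the commands visible in the log, assumes the job's Vagrant-managed test VM and the jbp helper scripts checked out earlier, and stands in for Groovy pipeline steps (archiveArtifacts, cleanWs) that have no direct shell equivalent here.

#!/usr/bin/env bash
# Sketch of the end-of-job teardown seen above; not the pipeline's actual Groovy.
set -e
vagrant halt            # graceful shutdown of the test VM
vagrant destroy -f      # then remove the libvirt domain entirely
mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output
jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
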